ASSESSMENT OF TEACHER-EDUCATION CANDIDATES: COMPARING REMOTE AND FACE-TO-FACE TESTING
1 Talpiot Academic College of Education (ISRAEL)
2 The MOFET Institute (ISRAEL)
About this paper:
Appears in: ICERI2021 Proceedings
Publication year: 2021
Page: 1227 (abstract only)
ISBN: 978-84-09-34549-6
ISSN: 2340-1095
doi: 10.21125/iceri.2021.0353
Conference name: 14th annual International Conference of Education, Research and Innovation
Dates: 8-9 November, 2021
Location: Online Conference
Abstract:
Research has shown that teaching quality is one of the most important factors in student achievement. Despite agreement that suitable personality characteristics are vital for success in teaching, studies show that the entrance criteria for most teacher-education programs are based solely on cognitive measures such as matriculation scores, GPA, or psychometric entrance exams.

MESILA, an innovative screening system, was introduced in 2015. It addresses candidates' personality, tendencies, behaviors, values, motivations, expectations, and interpersonal abilities, in addition to academic achievement. MESILA is an assessment center comprising interactive group-dynamics exercises, a semi-structured interpersonal interview, peer ratings, and personality questionnaires. A report is produced for each candidate, which includes sub-grades on seven indices and a general suitability score for teaching.

The outbreak of Covid-19 in 2020, and the ensuing quarantines and social distancing, prevented interactive and interpersonal face-to-face testing. To continue assessing candidates for teacher-education studies, the MESILA team began remote screening, using Zoom software. Most of the selection system remained the same as its face-to-face predecessor (e.g., personality questionnaires, simulations assessed by the evaluation team, and the interpersonal interview). The group-dynamics exercises and peer evaluations did not survive the changeover.

The present study describes the transition to online testing and examines the impact of this change by comparing the online test scores with the scores obtained from face-to-face testing conducted in previous years. Scores in both test modes were compared, as were subjective ratings by both the candidates and the evaluators.

Findings show similar statistical results for both methods, with only a slight difference in the means and variance of the candidates in one specific study program, for which higher mean scores were found in remote testing. It appears that the lack of face-to-face interaction and the cancellation of the group-dynamics tests led the evaluators to be less confident in their evaluations and more conservative in their judgements. Their shying away from giving extreme scores led to a slightly higher percentage of candidates successfully passing the selection battery.

Candidates' ratings showed a high level of satisfaction with both modes of screening. Evaluators' feedback indicated that remote testing provided adequate information. In addition, they rated remote testing as much more convenient. Two-thirds of the evaluators stated that, in their opinion, it is also possible to measure group dynamics through remote testing.

It appears that the ability to assess candidates and rate their suitability for teaching depends on the measures employed, the questions asked, and the judgements of experienced evaluators, and is not significantly influenced by the testing mode, as both modes yielded similar scores. It will be interesting to study the validity of both types of testing in the future, to see the effect of remote testing on the prediction of teacher effectiveness.
Keywords:
Teacher-education studies, remote testing, Covid-19, assessment, MESILA, computerized selection.