USING EYE-TRACKING TECHNOLOGY TO PROVIDE ASSISTIVE SUPPORT IN A MIXED REALITY LEARNING SYSTEM
Leibniz University Hannover (GERMANY)
About this paper:
Conference name: 16th annual International Conference of Education, Research and Innovation
Dates: 13-15 November, 2023
Location: Seville, Spain
Abstract:
In a course on the fundamentals of electrical engineering at our university, eye-tracking recordings were made of students working on theoretical paper-based tasks. The tasks cover different types of electrical engineering problems, such as sorting tasks and estimation tasks. The students worked on the tasks individually while wearing mobile eye-trackers. A total of 75 students participated in the study; 55 of the resulting data sets are usable for analysis.
The key questions are whether the gaze behaviour of students who solve the tasks correctly differs from that of students who do not, and if so, which differences are significant enough to distinguish between the groups. The goal of this differentiation is to provide targeted support. Depending on the task type, there is a third group in addition to the correct and incorrect solvers, characterized by having solved the tasks almost correctly. To differentiate between the two or three groups, quantitative features based on established eye-tracking metrics, such as First Fixation Duration (FFD) and Time to First Fixation (TTFF), are used initially (cf. Holmqvist et al. [1], p. 385).
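As an illustration of how such metrics could be derived, the following sketch computes TTFF and FFD from a list of fixation records. The record layout (onset in ms, duration in ms, AOI label) and the example data are assumptions for illustration only, not the study's actual data format.

```python
# Hypothetical fixation records: (start_ms, duration_ms, aoi_label).

def ttff(fixations, aoi):
    """Time to First Fixation: onset of the first fixation on the AOI,
    relative to the start of task processing (first fixation onset)."""
    task_start = fixations[0][0]
    for start, _, label in fixations:
        if label == aoi:
            return start - task_start
    return None  # AOI was never fixated

def ffd(fixations, aoi):
    """First Fixation Duration: duration of the first fixation on the AOI."""
    for _, duration, label in fixations:
        if label == aoi:
            return duration
    return None

fixations = [(0, 180, "A"), (200, 240, "B"), (470, 310, "C"), (820, 150, "B")]
print(ttff(fixations, "B"))  # 200
print(ffd(fixations, "B"))   # 240
```

Because both metrics depend only on the first fixations on an AOI, they can be evaluated while the student is still working on the task.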
Since the classical metrics are not sufficient as differentiation features for all tasks, symbol sequences composed of Area of Interest (AOI) identifiers are also used. To compare the symbol sequences, string distance measures such as the Levenshtein distance are used on the one hand; on the other hand, a hidden Markov model (HMM) is trained on the symbol sequences.
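A minimal sketch of the string-distance comparison: the Levenshtein distance counts the insertions, deletions, and substitutions needed to turn one AOI sequence into another, so similar scan paths yield small distances. The sequences below are hypothetical examples, not data from the study.

```python
def levenshtein(a, b):
    """Edit distance between two AOI symbol sequences (dynamic programming,
    keeping only the previous row of the DP table)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution / match
        prev = curr
    return prev[n]

# Each letter stands for one visited AOI; the sequences differ by one deletion.
print(levenshtein("ABACAB", "ABCAB"))  # 1
```

Distances to reference sequences from students who solved a task correctly could then serve as an additional classification feature.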
Results:
While for some tasks the metrics alone are sufficient to achieve about 80% correct classification, other tasks require more complex models, such as an HMM or a decision tree, to exceed 80% classification accuracy. On this data set, the HMM achieves an accuracy of 73%, provided the data are grouped only into correct and incorrect processing.
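One way such an HMM-based classification can work is to fit one model per group and assign a new AOI sequence to the group whose model yields the higher likelihood, computed with the forward algorithm. The sketch below uses hand-set two-state parameters as an assumption for illustration, not the models trained in the study.

```python
import math

def log_likelihood(seq, pi, A, B, symbols):
    """Forward algorithm: log P(seq | HMM) for a discrete-emission HMM.
    pi: initial state probabilities, A: transition matrix, B: emission matrix."""
    idx = {s: k for k, s in enumerate(symbols)}
    n = len(pi)
    alpha = [pi[i] * B[i][idx[seq[0]]] for i in range(n)]
    for sym in seq[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][idx[sym]]
                 for j in range(n)]
    return math.log(sum(alpha))

symbols = ["A", "B", "C"]  # AOI identifiers
# Hypothetical 2-state HMMs, one per group:
hmm_correct = dict(pi=[0.8, 0.2],
                   A=[[0.7, 0.3], [0.4, 0.6]],
                   B=[[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
hmm_wrong = dict(pi=[0.5, 0.5],
                 A=[[0.5, 0.5], [0.5, 0.5]],
                 B=[[0.34, 0.33, 0.33], [0.33, 0.34, 0.33]])

seq = "AABCC"
ll_c = log_likelihood(seq, symbols=symbols, **hmm_correct)
ll_w = log_likelihood(seq, symbols=symbols, **hmm_wrong)
print("correct" if ll_c > ll_w else "incorrect")  # this sequence fits "correct"
```

For longer sequences the forward recursion should be run in log space (or with scaling) to avoid numerical underflow.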
Which metrics are significant depends on the task type. Metrics such as TTFF and FFD are available shortly after task processing begins, so potential support based on gaze behaviour can be offered early.
The results of this study will be integrated into the mixed reality (MR) application ElMiR-Lab (Electronic Mixed Reality Lab), which is being developed specifically for the course. The application presents a working environment very similar to that in which the tasks are solved on paper. The difference is that the MR glasses on which the application runs also contain eye-trackers, so the previously determined metrics and models can be applied directly on the glasses. Depending on the significance of a metric, it can thus be determined comparatively early during task processing whether a student is likely to have problems with the task. In this case, support hints can optionally be displayed.
The significant metrics and the different models that can be used for differentiation are presented in the paper as a result of our study. Furthermore, the embedding of the results in an MR application is presented.
References:
[1] K. Holmqvist et al., Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford: Oxford University Press, 2015. ISBN 9780198738596.
Keywords:
Eye-Tracking, Mixed Reality, Learning System.