MULTI-MODALITIES IN CLASSROOM LEARNING ENVIRONMENTS
This paper will present initial findings from the second phase of a Horizon 2020 funded project, Managing Affective-learning Through Intelligent Atoms and Smart Interactions (MaTHiSiS), focusing on the use of different modalities in classrooms across Europe. The MaTHiSiS learning vision is to develop an integrated learning platform with re-usable learning components that will respond to the needs of future education in primary, secondary and special education schools, in vocational environments and in learning beyond the classroom. The system comprises learning graphs, each of which attaches an individual learning goal to the system. Each learning graph is built from a set of smart learning atoms designed to support learners in achieving progression, and cutting-edge technologies are used to identify learners' affect states and ultimately improve their engagement.
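To make the learning-graph structure concrete, the decomposition described above can be sketched as a simple data model. This is an illustrative sketch only: the project does not publish its data model here, so all class names, fields and the progression measure below are assumptions.

```python
# Illustrative sketch of a learning graph built from smart learning atoms.
# All names and the mean-mastery progression rule are assumptions, not the
# project's published design.
from dataclasses import dataclass, field


@dataclass
class SmartLearningAtom:
    """An atomic unit of learning with a mastery score."""
    name: str
    mastery: float = 0.0  # 0.0 (not started) .. 1.0 (achieved)


@dataclass
class LearningGraph:
    """An individual learning goal decomposed into smart learning atoms."""
    goal: str
    atoms: list = field(default_factory=list)

    def progress(self) -> float:
        """Overall progression toward the goal, here the mean atom mastery."""
        if not self.atoms:
            return 0.0
        return sum(a.mastery for a in self.atoms) / len(self.atoms)


graph = LearningGraph(goal="Recognise 2D shapes")
graph.atoms = [SmartLearningAtom("identify circles", 1.0),
               SmartLearningAtom("identify squares", 0.5)]
print(graph.progress())  # 0.75
```

Under this reading, a learner's progression toward a goal is simply an aggregate over the atoms attached to its learning graph.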
Much research identifies how learners engage with learning platforms (cf. Castillo-Merino & Sarradell-Lopez, 2014; Martin & Ertzberger, 2013; Kearney, Burden & Rai, 2015; Shamir & Baruch, 2012). Not only do e-learning platforms have the capability to engage learners, they also provide a vehicle for authentic classroom and informal learning (Maich & Hall, 2016), enabling ubiquitous and seamless learning (Martin & Ertzberger, 2013) within a non-linear environment. When interaction is more enjoyable, learners become more confident and motivated to learn and less anxious, especially those with learning disabilities or at risk of social exclusion (Shamir, Korat & Fellah, 2012).
Mello, Graesser & Picard (2007) identified the importance of understanding the affect state of learners, who may experience emotions such as ‘confusion, frustration, irritation, anger, rage, or even despair’, resulting in disengagement from learning. The MaTHiSiS system will use a range of platform agents, such as NAO robots and Kinect sensors, to measure the modalities that indicate affect state: facial expression analysis and gaze estimation (Kucirkova et al., 2014), mobile device-based emotion recognition (Coutrix & Mandran, 2012), skeleton motion tracking using depth sensors, and speech recognition.
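One way such modality streams might be combined into a single affect estimate is a weighted average of per-modality scores. This is a hypothetical illustration only: the modality labels, weights and fusion rule below are assumptions, not the project's published method.

```python
# Hypothetical fusion of per-modality affect/engagement scores (0..1).
# The weighted-average rule and the modality names are illustrative
# assumptions, not the MaTHiSiS system's actual approach.
def fuse_affect(modality_scores, weights=None):
    """Combine per-modality engagement scores into one estimate via a
    weighted average; equal weights are used unless specified."""
    if weights is None:
        weights = {m: 1.0 for m in modality_scores}
    total = sum(weights[m] for m in modality_scores)
    return sum(modality_scores[m] * weights[m] for m in modality_scores) / total


scores = {"facial": 0.8, "gaze": 0.6, "skeleton": 0.7, "speech": 0.5}
print(round(fuse_affect(scores), 2))  # 0.65
```

In practice any real system would need per-modality confidence handling and missing-data rules; the sketch only shows the basic idea of aggregating heterogeneous signals into one engagement estimate.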
Data have been collected using multimodal learning analytics developed for the project, including annotated multimodal recordings of learners interacting with the system, facial expression data and the learner's position. In addition, interviews with teachers and learners, from mainstream education as well as learners with profound and multiple learning difficulties and autism, have been carried out to measure learners' engagement and achievement. Findings from mainstream and special schools based in the United Kingdom will be presented and challenges shared.