Osaka Sangyo University (JAPAN)
About this paper:
Appears in: INTED2020 Proceedings
Publication year: 2020
Pages: 6882-6889
ISBN: 978-84-09-17939-8
ISSN: 2340-1079
doi: 10.21125/inted.2020.1828
Conference name: 14th International Technology, Education and Development Conference
Dates: 2-4 March, 2020
Location: Valencia, Spain
Today, a decline in the scholastic ability of students at Japanese universities is becoming a serious problem. One of the reasons is that classes are too large for a teacher to grasp students' degree of understanding and concentration.

Generally, teachers estimate students' degree of understanding and concentration from visual cues such as facial expression and posture by carefully observing students' behavior during class. However, it is difficult for teachers to observe and analyze this visual information, and to estimate students' degree of understanding and concentration, while they are teaching.

A survey we conducted of 30 teachers from different universities in Japan revealed that the majority of teachers check posture and facial expression to estimate students' degree of understanding and concentration. We therefore propose a method that helps teachers estimate a student's concentration ratio from the student's external features over a given period of time. To realize this, we propose a method to identify students' behaviors by measuring facial expressions and postures, such as the degree of eye opening, line of sight, and facial direction, using Omron's Human Vision Sensor.

As a first step of the study, we video-recorded students during class, then extracted and classified their behaviors into 11 typical behaviors, each consisting of a combination of facial information and posture. We then conducted another survey, of 11 university teachers, to associate each typical behavior with one of two states: "concentrated" or "not concentrated".

We developed a system to calculate a student's concentration ratio. First, the system identifies a student's behavior at a given moment from the values measured by the sensor, along with the state associated with that behavior. The system repeats this process once per second and calculates the concentration ratio as the number of "concentrated" judgements divided by the total number of judgements. In this way, we can calculate a concentration ratio, grounded in teachers' estimations, from the values measured by the sensor device.
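The per-second judgement loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the behavior labels (apart from the two named in the evaluation), the behavior-to-state mapping, the threshold values, and the `identify_behavior` rules are all hypothetical stand-ins for the survey-derived classification and the actual Human Vision Sensor readings.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One once-per-second measurement (hypothetical fields/units)."""
    eye_opening: float    # degree of eye opening, 0.0 (closed) to 1.0 (open)
    gaze_vertical: float  # line of sight, vertical angle in degrees
    face_pitch: float     # facial direction, pitch in degrees (negative = down)

# Hypothetical mapping from identified behavior to the teacher-assigned state.
BEHAVIOR_STATE = {
    "sitting_straight_listening": "concentrated",
    "copying_from_blackboard": "concentrated",
    "head_down_eyes_closed": "not concentrated",
}

def identify_behavior(r: SensorReading) -> str:
    """Toy threshold rules standing in for the paper's behavior identification."""
    if r.eye_opening < 0.2:
        return "head_down_eyes_closed"
    if r.face_pitch < -30:  # head bent deeply toward the desk
        return "copying_from_blackboard"
    return "sitting_straight_listening"

def concentration_ratio(readings: list[SensorReading]) -> float:
    """Fraction of once-per-second judgements whose state is 'concentrated'."""
    judgements = [BEHAVIOR_STATE[identify_behavior(r)] for r in readings]
    return judgements.count("concentrated") / len(judgements)
```

For example, a sequence of three readings in which one is judged "not concentrated" yields a concentration ratio of 2/3.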

In the evaluation, the overall concordance rate between the system's judgements and teachers' estimations was 78%. The system identified the behavior of "sitting straight and listening to the lecture" with 95% accuracy, but identified "copying from the blackboard" with only 31% accuracy, because many students bend their heads down deeply at that moment, making facial information difficult to obtain.
Keywords: Behavior analysis, human vision sensor, concentration ratio estimation.