DIGITAL LIBRARY
EDUCATIONAL MULTIMODAL DATA MINING AND FUSION THROUGH KNOWLEDGE GRAPHS FOR TOPIC-RELATION EXTRACTION IN STUDY RECOMMENDATIONS
Universität Siegen (GERMANY)
About this paper:
Appears in: EDULEARN20 Proceedings
Publication year: 2020
Pages: 3696-3705
ISBN: 978-84-09-17979-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2020.1020
Conference name: 12th International Conference on Education and New Learning Technologies
Dates: 6-7 July, 2020
Location: Online Conference
Abstract:
With the widespread availability of online educational resources, the need has emerged to extract the interlinks and relations between them. Interlinking educational material not only supports the recommendation of relevant study topics and resources, but also has the potential to uncover patterns and learning paths that students can follow. To achieve these goals, the content of the learning materials has to be thoroughly analyzed, understood, and then connected to other, similar content. Since learning data is multimodal in nature, comprising textual, visual, and auditory data types, intelligent mining of these modalities is essential for extracting relations between contents, whether of the same data type or of different types. Moreover, relation extraction methods play a major role in mining and discovering the hidden interlinks between the topics within learning materials.

In this paper, we propose a semantic search approach based on knowledge graphs and multimodal data mining. It represents multimodal learning-material content as an interconnected network, which in turn supports students with recommendations of similar topics to study. We handle textual and visual data modalities and fuse them in a single mining algorithm. Textual content is processed with a domain-specific text mining approach, while visual content is passed through an automatic annotation procedure that generates sufficient descriptions of it. Both data modalities are then fused into one knowledge graph. Knowledge graphs are a promising means of linking multiple data types within one representation, where the extracted information can then be searched to form the basis of topic recommendations for students. We test our approach on sample university-level course material, where relevant topics are discovered and recommended to students. Our tests show that the developed algorithm outperforms standard monomodal mining algorithms and is able to find relations between different parts of the learning materials that are not found with traditional keyword-based search methods.
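To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how text-derived terms and image annotations could be merged into one knowledge graph and then traversed for recommendations. The data, the simple term extractor, and helper names such as build_knowledge_graph and recommend_related are illustrative assumptions; the paper's domain-specific text mining and automatic annotation steps are replaced here by placeholder tokenization and precomputed image labels.

```python
# Illustrative sketch: fusing text-derived terms and image annotations
# into a single knowledge graph, then recommending related sections.
# Assumes keyword extraction and image labels are already available.
import networkx as nx

# Hypothetical course data: each section has raw text and labels produced
# by an automatic image-annotation step (e.g., captions of its figures).
sections = {
    "sec1": {"text": "gradient descent minimizes a loss function",
             "image_labels": ["loss curve", "gradient"]},
    "sec2": {"text": "backpropagation computes gradients layer by layer",
             "image_labels": ["neural network", "gradient"]},
    "sec3": {"text": "decision trees split data on feature thresholds",
             "image_labels": ["tree diagram"]},
}

STOPWORDS = {"a", "by", "on", "the"}

def extract_terms(text):
    """Stand-in for domain-specific text mining: lowercase tokens minus stopwords."""
    return {tok for tok in text.lower().split() if tok not in STOPWORDS}

def build_knowledge_graph(sections):
    """Fuse textual terms and visual annotations into one graph:
    section nodes are connected to the topic nodes they mention or depict."""
    g = nx.Graph()
    for sec_id, data in sections.items():
        g.add_node(sec_id, kind="section")
        topics = extract_terms(data["text"]) | {lbl.lower() for lbl in data["image_labels"]}
        for topic in topics:
            g.add_node(topic, kind="topic")
            g.add_edge(sec_id, topic)
    return g

def recommend_related(graph, sec_id, k=3):
    """Rank other sections by the number of shared topic nodes (two-hop neighbours)."""
    scores = {}
    for topic in graph.neighbors(sec_id):
        for other in graph.neighbors(topic):
            if other != sec_id and graph.nodes[other]["kind"] == "section":
                scores[other] = scores.get(other, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:k]

graph = build_knowledge_graph(sections)
print(recommend_related(graph, "sec1"))  # sec2 shares the 'gradient' topic via its image label
```

In this toy example the visual modality is what links sec1 and sec2: their texts share no token, but the image label "gradient" creates a common topic node, which is exactly the kind of cross-modal relation a keyword-only search over the text would miss.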
Keywords:
Multimodal Data Fusion, Text Mining, Image Annotation, Knowledge Graphs, Topic Relation Extraction.