A METHODOLOGY TO ASSESS LEARNING PATTERNS IN ONLINE COURSES MEDIATED BY AN LMS
CRACS / INESCTEC & Universidade do Porto (PORTUGAL)
About this paper:
Appears in: EDULEARN20 Proceedings
Publication year: 2020
Pages: 7604-7608
ISBN: 978-84-09-17979-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2020.1930
Conference name: 12th International Conference on Education and New Learning Technologies
Dates: 6-7 July, 2020
Location: Online Conference
Abstract:
The current pandemic caused by the new coronavirus has forced many educators to adapt their teaching methodologies quickly and to turn to distance education tools in an e-learning setting. Despite that urgency, students are already used to being evaluated through online platforms. In most scientific/technical courses in higher education institutions, some or all parts of the assessment are based on downloading, modifying, and uploading files; on accessing slides and hand notes through learning management systems (LMS); or on participating in online learning activities in a virtual group with other colleagues. Most of the time, all these activities are mediated by an LMS, in which the interactions between the users and the system itself are stored in log files in the form of registered actions performed by the user, whether that user is an educator or a student.
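As an illustration of the kind of data involved, the sketch below loads a Moodle log export and counts the registered actions per user and component. The column names follow a typical Moodle standard log export but may differ between installations, and the file name is hypothetical:

```python
import pandas as pd

# Hypothetical CSV export of the Moodle standard log (column names
# may vary between Moodle versions and installations).
log = pd.read_csv("course_log.csv")

# A typical export carries a timestamp, the acting user, the event
# context, the component, and the event name.
log["Time"] = pd.to_datetime(log["Time"], dayfirst=True)

# Count registered actions per user, per component: the raw material
# from which behavioral features are later derived.
actions = (
    log.groupby(["User full name", "Component"])
       .size()
       .rename("n_actions")
       .reset_index()
)
print(actions.head())
```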

Recently, research has focused on analyzing that logged information and building datasets capable of feeding machine learning algorithms. In turn, these algorithms are able to predict whether students will fail a course [1,2,3]. The goal is clear: correctly predicting a student’s grade while the semester is still ongoing makes it possible to create an alert system for imminent potential-failure situations.

Although the above-mentioned studies are based on data collected over a 3-year span, during which the teaching methods, evaluations, educators, and subjects were kept the same, they were only applied to a single course. However, by looking carefully at the features identified in these studies, we may assume the authors were careful to pick features capable of being extrapolated to other courses.

In this paper, we discuss the 20 features used to build the proposed decision trees and conclude that we can narrow them down to 6 that can be adapted and generalized for use in other courses. We start by providing the context for the research line pursued and the urgency of creating such a system when most lectures are being given almost entirely at a distance. We then focus on the data preparation needed to create these features, which are computed from the Moodle logs, using thresholds to derive activity duration times [1]. Then, using the same data, we compute the possible derivations that keep the accuracy from decaying by more than 5%. We proceed by computing the minimum set of features that keeps the precision and recall of the model at the same level. Finally, we create an abstract model for each of the resulting features, to make it capable of being generalized to other courses.
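The paper's exact computations are not reproduced here, but a minimal sketch of the two core steps, assuming per-user timestamped events, could look as follows: durations are obtained from consecutive event timestamps capped by an inactivity threshold, and features are then pruned greedily as long as cross-validated accuracy stays within 5% of the full model. All names and the 30-minute threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

THRESHOLD = 30 * 60  # assumed inactivity cap, in seconds

def activity_durations(timestamps):
    """Durations between consecutive events of one user, with gaps
    longer than THRESHOLD treated as the end of an activity."""
    gaps = np.diff(np.sort(timestamps))
    return np.minimum(gaps, THRESHOLD)

def prune_features(X, y, feature_names, max_drop=0.05):
    """Greedy backward elimination: drop a feature whenever the
    cross-validated accuracy stays within `max_drop` of the
    full-feature baseline."""
    base = cross_val_score(DecisionTreeClassifier(), X, y, cv=5).mean()
    keep = list(range(X.shape[1]))
    for j in sorted(keep, reverse=True):
        trial = [k for k in keep if k != j]
        score = cross_val_score(
            DecisionTreeClassifier(), X[:, trial], y, cv=5
        ).mean()
        if base - score <= max_drop:
            keep = trial
    return [feature_names[k] for k in keep]
```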

In the end, we test our model (of 6 features) on a different course, given in 2019, and show the predictions at the middle, and at 3/4, of its full length. Results show that our methodology achieves an accuracy of more than 80%.
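A sketch of how such mid-course checkpoints could be evaluated, where the course dates are illustrative and `model`, `log`, `y_true`, and `build_features` are assumed to come from the training stage sketched above:

```python
from datetime import datetime
from sklearn.metrics import accuracy_score

course_start = datetime(2019, 9, 16)  # illustrative course dates
course_end = datetime(2020, 1, 17)

# Evaluate the trained model halfway through the course and at
# three quarters of its full length.
for fraction in (0.5, 0.75):
    cutoff = course_start + fraction * (course_end - course_start)
    X_eval = build_features(log, cutoff)  # features from events before `cutoff`
    y_pred = model.predict(X_eval)
    print(f"at {fraction:.0%} of the course: "
          f"accuracy = {accuracy_score(y_true, y_pred):.2f}")
```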

The created model is thus able to feed a reliable predictive tool and an alert system capable of monitoring students’ activities and warning them when their current behavior is likely to lead to a failing grade. The warning may give students the time they need to change their approach to the course, while the teacher is simultaneously alerted to the existence of potentially failing students and can therefore devote more time and resources to them.
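One way such an alert could be wired up, as a minimal sketch in which the probability threshold and the risk-list handling are assumptions rather than part of the paper:

```python
FAIL_PROB_THRESHOLD = 0.6  # assumed cut-off for raising an alert

def check_students(model, features_by_student):
    """Warn each student (and collect a list for the teacher) whose
    predicted probability of failing exceeds the threshold."""
    at_risk = []
    for student, x in features_by_student.items():
        p_fail = model.predict_proba([x])[0][1]  # assumes class 1 = "fail"
        if p_fail >= FAIL_PROB_THRESHOLD:
            at_risk.append((student, p_fail))
            print(f"warning {student}: estimated failure risk {p_fail:.0%}")
    return at_risk  # the teacher's list of students needing attention
```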
Keywords:
Predicting potential failures, machine learning, learning analytics, learning management system.