WHAT CAN BE LEARNED BY GRADING ANALYTICS?
Düsseldorf University of Applied Sciences (HSD) (GERMANY)
About this paper:
Appears in: INTED2018 Proceedings
Publication year: 2018
Pages: 3097-3106
ISBN: 978-84-697-9480-7
ISSN: 2340-1079
doi: 10.21125/inted.2018.0593
Conference name: 12th International Technology, Education and Development Conference
Dates: 5-7 March, 2018
Location: Valencia, Spain
Abstract:
Educational Data Mining or Big Data in higher education promises to enable better and more effective decision making in universities, to reveal students' learning needs, to help learners and instructors recognize threats to learning success, and to support intelligent e-learning systems [Daniel, 2015, Romero and Ventura, 2010]. The bulk of the data being mined comes from the digital footprints of students as they use online learning tools, learning management systems, or other digital systems in the classroom and elsewhere on campus [Sclater et al., 2016, Heo et al., 2016, bin Mat et al., 2013]. This type of data is the raw material for Learning Analytics, the most prominent branch of Educational Data Mining. However, not all universities may want to introduce digital surveillance and observation of teaching and learning [Prinsloo and Slade, 2014, Weade and Evertson, 1991], and many countries restrict such data recording through their privacy laws.

In this article we look at useful applications of Grading Analytics, i.e. the analysis of data held in a conventional student information system of a university [Dziuban et al., 2012]. The most important data for our work are the exam results and the grading history of all students. We therefore prefer the term Grading Analytics over Academic Analytics, which some authors use with a similar meaning [Egan, 2017]. We describe the typical data structure of a student record in the student information system of a German university. In order to satisfy German privacy laws, we developed a number of measures for effective anonymization. Using real student data, we show that predicting successful completion of a study program is possible even when the individual exam results, i.e. the raw grading data, are not used. We compare different methods for predictive modeling and discuss the results in the context of previous work [Frochte and Bernst, 2016].
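The paper does not publish its model or feature set, so the following is only an illustrative sketch: we assume aggregate, anonymized per-student features (e.g. credit points earned early in the program, number of exam attempts) rather than raw grades, and fit a plain logistic regression on a synthetic cohort to show how completion could be predicted from such data.

```python
# Hypothetical sketch -- features, model, and data are our assumptions,
# not the authors' published method.
import math
import random

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain-Python logistic regression fitted by batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic cohort: [normalized first-year credits, normalized exam attempts].
# Completers tend to earn more credits with fewer attempts.
random.seed(0)
X, y = [], []
for _ in range(200):
    completed = random.random() < 0.5
    credits = random.gauss(0.8 if completed else 0.4, 0.1)
    attempts = random.gauss(0.3 if completed else 0.6, 0.1)
    X.append([credits, attempts])
    y.append(1 if completed else 0)

w, b = train_logistic(X, y)
accuracy = sum((predict(w, b, xi) > 0.5) == yi for xi, yi in zip(X, y)) / len(y)
```

The point of the sketch is that the inputs are aggregates derivable from the student information system, not individual exam grades, which is the constraint the article works under.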

Our research in Grading Analytics poses the question of what the different stakeholders in a university could, or should, really learn by using this technology on a regular basis. Many research projects have shown that advanced methods of data analysis can predict the success of students even at the time they enter a graduate program, with an accuracy of up to about 90% [Bahadir, 2016]. Does this mean that during freshers' week we should tell the presumed failure candidates to consider studying somewhere else? Advocates of Learning Analytics argue that teachers can intervene once students at risk are identified, i.e. that the interaction between teachers and students could be guided by this information [den Bogaard and de Vries, 2017].

One major goal of our work is the generation of dynamic information that can help students to pass exams and ultimately succeed in their study program. This may be information serving the purpose of critical self-reflection. Student awareness of risk can lead to changes in behavior [Sclater and Mullan, 2017], but this is just one possible objective. We also want to provide students with unbiased information on how they can maneuver toward success, exploiting all options within the regulations of their study program. As one basis for generating such supporting information for students, we discuss automated analysis of grade histograms and cluster detection within student cohorts. This article presents concepts and preliminary development results.
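The abstract only outlines histogram analysis and cluster detection; the concrete features and algorithm below are our assumptions. A minimal sketch of cohort clustering might represent each student by (mean grade, failure rate) and run a tiny k-means to surface subgroups, e.g. a stronger subgroup and a subgroup at risk:

```python
# Hypothetical sketch -- feature choice and k-means are our assumptions.
import random

def kmeans(points, k=2, iters=50, seed=1):
    """Lloyd's k-means on a list of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute centers; keep the old center if a group is empty.
        centers = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Synthetic cohort: German grades run from 1.0 (best) to 5.0 (fail).
rng = random.Random(42)
students = []
for _ in range(60):   # a stronger subgroup: good mean grade, few failures
    students.append((rng.uniform(1.0, 2.5), rng.uniform(0.0, 0.1)))
for _ in range(40):   # a subgroup at risk: weaker grades, more failures
    students.append((rng.uniform(3.0, 4.0), rng.uniform(0.2, 0.5)))

centers, groups = kmeans(students, k=2)
```

In such a setting the detected clusters, not any individual's record, would be the basis for the supporting information given back to students.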
Keywords:
Data Mining, Learning Analytics, Grading Analytics, Predictive Modeling, Student Information System.