AN EXAMPLE TOWARDS THE RESPONSIBLE USE OF AI FOR HIGHER EDUCATION: EXPLAINABLE STUDENT DROPOUT PREDICTIONS
AP University of Applied Sciences and Arts Antwerp (BELGIUM)
About this paper:
Appears in: INTED2023 Proceedings
Publication year: 2023
Pages: 4330-4336
ISBN: 978-84-09-49026-4
ISSN: 2340-1079
doi: 10.21125/inted.2023.1148
Conference name: 17th International Technology, Education and Development Conference
Dates: 6-8 March, 2023
Location: Valencia, Spain
Abstract:
The digital transformation is here to stay and is increasingly impacting the way we live. Organizations are moving (part of) their activities online and investing in digital platforms through which their stakeholders can interact. These online interactions leave data traces that can be leveraged with AI techniques to provide an optimal personalized experience, for example through personalized emails or website banners that show only the most relevant information or recommendations for each customer.

Education is no exception to this trend, and, accelerated by the recent pandemic, higher education institutions are increasingly moving the learning process from the classroom to an online or hybrid setting. This generates more data, which creates opportunities to use AI to support and optimize the learning process.

The use of these AI techniques should, however, be done responsibly and with caution. Blindly applying these tools as a black box can lead to undesired consequences. Examples include legal risk-assessment tools that exhibit racial bias, recruiting tools with a clear preference against hiring women, and tools that only work properly for white males. Such biases can be especially problematic in an educational setting. To mitigate these issues, the field of explainable AI has grown rapidly, with the goal of opening the black box and providing an explanation of the underlying logic followed by AI algorithms. In general, there are two approaches to explaining the predictions of an AI model: either the model is explainable by design, which means its logic can be interpreted by humans, or the model is too complex to be fully understood by humans and post-hoc methods are used to provide an explanation for specific predictions. The latter approach is also referred to as locally interpretable, since a separate explanation is generated for each prediction, as opposed to explainable-by-design models, which are globally interpretable.
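To make this distinction concrete, the two notions can be written down explicitly; the formulas below are the standard definitions of logistic regression and of SHAP's additive attribution, not results from this paper. An explainable-by-design model such as logistic regression predicts

p(dropout | x) = \sigma\left(\beta_0 + \sum_{j=1}^{d} \beta_j x_j\right), \qquad \sigma(z) = \frac{1}{1 + e^{-z}},

where the fitted coefficients \beta_j apply to every student, so inspecting them explains the model globally. A post-hoc method such as SHAP instead decomposes one specific prediction f(x) of an arbitrary model into additive per-feature contributions,

f(x) = \phi_0 + \sum_{j=1}^{d} \phi_j(x),

where \phi_j(x) is the contribution of feature j for that particular student and a new decomposition is computed for every prediction (a local explanation).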

At AP University of Applied Sciences and Arts, we have developed and compared explainable AI algorithms for predicting dropout of first-year students. Many studies have shown that the first year is critical, and early identification of students at risk of dropping out makes it possible to intervene and support these students. To optimally plan such an intervention triggered by a model prediction, it is important to know why the model flagged a specific student as being at risk. To this end, we have compared two algorithms: an explainable-by-design model based on logistic regression, and a more complex model based on a random forest combined with the post-hoc SHAP method to explain its predictions. We find a trade-off between, on the one hand, a more complex model that is harder to explain but reaches a higher accuracy and, on the other hand, a simpler model that is explainable by design but has a lower accuracy. A next step is to present these predictions in a dashboard so that student counselors can use them to specifically target students at risk.
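As an illustration of this kind of comparison (a minimal sketch, not the authors' actual pipeline or data), the code below trains both model types on synthetic stand-in data and uses the shap library to explain a single flagged student; all feature names and parameter choices are hypothetical.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for first-year student data; feature names are hypothetical.
features = ["secondary_school_gpa", "credits_attempted", "lms_logins", "assignments_late"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X = pd.DataFrame(X, columns=features)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Explainable-by-design model: the fitted coefficients are a global explanation.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# More complex model: typically more accurate, but a black box without post-hoc tools.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", accuracy_score(y_test, logreg.predict(X_test)))
print("random forest accuracy:      ", accuracy_score(y_test, forest.predict(X_test)))
print("global explanation (coefficients):", dict(zip(features, logreg.coef_[0].round(2))))

# Post-hoc, local explanation: per-feature SHAP contributions for one flagged student.
explainer = shap.TreeExplainer(forest)
student = X_test.iloc[[0]]
shap_values = explainer.shap_values(student)
print("local explanation (SHAP values for this student):", np.round(shap_values, 3))
```

The logistic regression coefficients give one global ranking of risk factors, whereas the SHAP values give a student-specific breakdown that a counselor could consult when planning an intervention for that student.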
Keywords:
Learning analytics, higher education, dropout prediction, explainable AI.