DESIGNING GDPR-COMPLIANT EXPLAINABLE AI FOR STUDENT ASSESSMENT: FROM LEGAL PRINCIPLES TO IMPLEMENTABLE CONTROLS
Portuguese Naval Academy / CINAV (PORTUGAL)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
The growing use of artificial intelligence (AI) in student assessment, early-warning systems, adaptive learning platforms and generative AI assistants is reshaping teaching and learning in higher education. These emerging technologies promise early detection of dropout risk, personalised feedback at scale and more inclusive forms of participation, but they also raise pressing questions about transparency, contestability and data protection. Under the General Data Protection Regulation (GDPR), students have rights to fairness, transparency and meaningful information about automated decisions, as well as the right to express their views, contest outcomes and seek human intervention. Yet universities and edtech providers still lack concrete guidance on how to translate these legal principles into explainable AI (XAI) mechanisms and organisational routines that are usable in real classrooms. This paper presents a design-oriented framework for building GDPR-compliant explainable AI systems in education, explicitly focused on moving from legal principles to implementable controls. The framework combines a technical-regulatory correspondence matrix with a set of XAI design patterns tailored to educational contexts, covering local and global explanation artefacts, meaningful user-facing notices, decision dossiers, consent and preference records, and human-in-the-loop review workflows that fit academic calendars and assessment practices. A conceptual case study of an AI-based student risk prediction tool, integrated into a learning management system, illustrates how explanation methods (for example, feature attribution and example-based explanations), logging strategies and review procedures can be embedded in a machine learning operations (MLOps) pipeline to support students' rights to information, contestation and human oversight. The paper analyses how these design choices affect students' trust in AI-supported assessment, their perceptions of fairness and agency, and teachers' ability to interpret, challenge and improve model outputs over time. It also reflects on institutional governance, including the roles of ethics committees, data protection officers and teaching and learning centres in supervising AI innovations on campus. By systematically linking GDPR obligations with concrete XAI mechanisms and governance processes in educational AI systems, the proposed framework helps higher-education institutions adopt AI in ways that are not only technically effective, but also pedagogically transparent, legally robust and aligned with students' fundamental rights. Finally, the paper argues that the same correspondence approach can be reused for other emerging AI-based educational technologies, such as automated grading tools, generative-AI writing assistants and conversational tutoring chatbots, providing a reusable blueprint for aligning rapid AI innovation in higher education with GDPR-compliant governance, sustainable risk management and long-term student wellbeing.
Keywords:
Artificial Intelligence in Education (AIED), Explainable Artificial Intelligence (XAI), GDPR compliance, Student assessment, Learning analytics, Human-in-the-loop, Educational data governance, Trustworthy AI in higher education.
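As a purely illustrative companion to the case study described in the abstract, the Python sketch below shows one way a local feature-attribution explanation, a plain-language student notice and a human-review flag could be bundled into a "decision dossier" record within an MLOps pipeline. All names, feature weights and thresholds are hypothetical; this is a minimal sketch assuming a simple linear risk model, not the paper's actual implementation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import math

# Hypothetical linear risk model: weights, means and version tag are illustrative only.
FEATURE_WEIGHTS = {"missed_sessions": 0.9, "avg_quiz_score": -0.05, "lms_logins_per_week": -0.3}
FEATURE_MEANS = {"missed_sessions": 2.0, "avg_quiz_score": 70.0, "lms_logins_per_week": 5.0}
INTERCEPT = -0.5
MODEL_VERSION = "risk-model-0.1-demo"


def predict_risk(features: dict) -> tuple[float, dict]:
    """Return a dropout-risk probability and per-feature contributions
    (weight * deviation from the training mean), i.e. a simple local
    feature-attribution explanation for a linear model."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * (features[name] - FEATURE_MEANS[name])
        for name in FEATURE_WEIGHTS
    }
    logit = INTERCEPT + sum(contributions.values())
    return 1.0 / (1.0 + math.exp(-logit)), contributions


@dataclass
class DecisionDossier:
    """Record supporting information duties (GDPR Arts. 13-15) and human oversight (Art. 22)."""
    student_pseudonym: str
    model_version: str
    risk_score: float
    feature_contributions: dict
    plain_language_notice: str
    requires_human_review: bool
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    reviewer_decision: str | None = None  # filled in later by the tutor during review


def build_dossier(student_pseudonym: str, features: dict, review_threshold: float = 0.5) -> DecisionDossier:
    """Score one student and assemble the explanation, notice and review flag into a dossier."""
    score, contributions = predict_risk(features)
    top_factor = max(contributions, key=lambda name: abs(contributions[name]))
    notice = (
        f"The system estimates an elevated dropout risk ({score:.0%}). "
        f"The factor that influenced this estimate most was '{top_factor}'. "
        "A tutor will review this result before any action is taken, and you may "
        "contest it or request human intervention."
    )
    return DecisionDossier(
        student_pseudonym=student_pseudonym,
        model_version=MODEL_VERSION,
        risk_score=round(score, 3),
        feature_contributions={k: round(v, 3) for k, v in contributions.items()},
        plain_language_notice=notice,
        requires_human_review=score >= review_threshold,
    )


if __name__ == "__main__":
    dossier = build_dossier("stu-8f3a", {"missed_sessions": 6, "avg_quiz_score": 55, "lms_logins_per_week": 2})
    # In a real pipeline this record would be appended to an audit log and surfaced in the LMS.
    print(json.dumps(asdict(dossier), indent=2))
```

In a production setting the same dossier structure could be persisted alongside model-version metadata so that data protection officers and reviewing teachers can trace, contest and overturn individual predictions, which is the kind of control the framework maps onto GDPR obligations.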