HARMONIZING HUMAN AND ALGORITHMIC ASSESSMENT: LEGAL REFLECTIONS ON THE RIGHT TO EXPLAINABILITY IN EDUCATION
1 Italian National Research Council, Institute of Educational Technologies / University of Bologna, Alma Mater Studiorum (ITALY)
2 University of Bari (ITALY)
3 Italian National Research Council, Institute of Educational Technologies (ITALY)
About this paper:
Conference name: 17th International Conference on Education and New Learning Technologies
Dates: 30 June-2 July, 2025
Location: Palma, Spain
Abstract:
The increasing use of artificial intelligence in educational contexts has introduced new legal and ethical challenges related to the transparency of automated assessments. Central to this discussion is the concept of "explainability," which has emerged as the right to understand the logical processes underpinning algorithmic decisions that directly affect students. This right shares significant analogies with the established legal principle that educational assessments must be justified, itself part of the broader duty to state reasons (motivation) for administrative decisions. Both rights inherently demand transparency, accountability, and clarity in the decision-making process, enabling students to comprehend and, where necessary, contest decisions affecting them.
This paper examines the legal connection between the traditional right of students to receive explanations for their grades, understood as an expression of administrative transparency, and the emerging right to AI explainability in automated decision-making. It identifies points of convergence, such as the safeguarding of transparency, accountability, and due process. However, it also highlights notable divergences, primarily linked to the differing nature of the decision-maker: human educators, who exercise discretionary judgment informed by pedagogical experience, versus AI-based systems, which rely on intricate, often opaque algorithmic logic and probabilistic methods that are inherently resistant to straightforward interpretation.
Further, the paper critically explores the legal consequences of recognizing either substantial equivalence or fundamental difference between human and algorithmic evaluation processes. If substantial equivalence is acknowledged, existing legal guarantees, such as the obligations of motivation, transparency, and accountability, can readily extend to AI-based decision-making without extensive normative reform, reinforcing student protection and facilitating legal remedies. Under this scenario, judicial review could effectively restore balance in cases of unjustified or unfair assessments. Conversely, recognizing a fundamental difference would necessitate tailored legislative interventions, transparency standards specific to algorithmic decision-making, and a reallocation of legal responsibility towards algorithm developers and deploying institutions rather than individual educators. In that scenario, judicial oversight mechanisms would require redesign, giving rise to novel forms of judicial review equipped to evaluate algorithmic complexity and adjudicate disputes involving AI-based reasoning.
The paper highlights the critical need for legal scholarship to address these emerging complexities, so that educational and administrative law remain coherent as AI applications evolve.
Keywords:
AI, Education, Law, Explainability, Rights, Assessment.