ETHICAL DIMENSIONS OF USING AI IN EDUCATION
"Ss Cyril and Methodius" University (MACEDONIA)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 2385 (abstract only)
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.2385
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
In recent years, we have witnessed the rapid development of artificial intelligence (AI) technologies and tools that can be applied in various fields, including education. AI has the potential to revolutionize teaching, improve the quality of education, and benefit students, teachers, and schools. However, the application of AI in education is also accompanied by certain drawbacks and risks. Its implementation in education must therefore be accompanied by appropriate regulations and set within clear ethical frameworks.

This paper explores the ethical dimensions of applying AI in schools and universities. We need to be aware that the improper use of AI systems in education can have serious consequences for the quality of education. The paper argues that the development, introduction, and use of AI technologies in education should be subject to strict requirements regarding the protection of the privacy of every actor in education, as well as safety, transparency, inclusiveness, impartiality, explainability, fairness, and accountability.

The use of AI algorithms, systems, and applications in education raises numerous challenges and ethical issues, such as: protecting students' fundamental human rights; safeguarding students' personal data and privacy; combating the digital divide, stereotypes, discrimination, and cyberbullying; promoting gender equality; and respecting copyright and intellectual property rights.

A particular challenge is the ethical use of the datasets on which AI systems are trained and their potential bias. Gender, social, and cultural bias, and discrimination of various kinds, can arise in educational applications of AI from datasets that are themselves biased: datasets that do not represent society adequately and that reflect existing patterns of discrimination.

Involving people from different cultural backgrounds and with different perspectives on life in the development of AI can help avoid potential biases. In addition to impartiality, it is desirable that the data used and produced by AI applications in education be accessible, interoperable, and of high quality. It is also very important that AI be available to everyone under the same conditions. This will ensure equal access to education and learning, so that no one is left behind, especially people with disabilities.

The analysis also addresses the transparency and explainability of AI systems (the so-called "black box" problem). All actors in education should understand how AI functions, that is, how its decisions are made, and there should be ongoing efforts to raise awareness of AI's potential errors and biases.

The paper particularly emphasizes the importance of honesty in the use of AI technologies and of respect for copyright and intellectual property. Ensuring academic integrity is a major challenge: it requires standards and tools for verifying academic honesty, as well as the adaptation of assessment methods and originality checks.

Because of their high-risk nature, AI technologies, as well as their introduction and use in education, should be under constant control and supervision. If computers and AI make decisions as humans do, the question arises of who will bear responsibility for bad decisions with negative consequences.

The control and supervision of AI technologies must nevertheless be carefully designed and implemented, since excessive regulation risks slowing down the development of AI.
Keywords:
Artificial intelligence, education, ethics.