BRAVE NEW WORLD: AI IN TEACHING AND LEARNING
Technical University of Cluj-Napoca (ROMANIA)
About this paper:
Appears in: ICERI2023 Proceedings
Publication year: 2023
Pages: 8706-8713
ISBN: 978-84-09-55942-8
ISSN: 2340-1095
doi: 10.21125/iceri.2023.2221
Conference name: 16th annual International Conference of Education, Research and Innovation
Dates: 13-15 November, 2023
Location: Seville, Spain
Abstract:
We exemplify how Large Language Models (LLMs) are used in both teaching and learning. We also discuss the AI incidents that have already occurred in the education domain, and we argue for the urgent need to introduce AI policies in universities in the context of the ongoing strategies to regulate AI. Our view is that each institution should have a policy for AI in teaching and learning. This is important from at least twofolds:
(i) to raise awareness of the numerous educational tools that can both positively and negatively affect education;
(ii) to minimise the risk of AI incidents in education.

First, teachers (including teachers of AI) have difficulty keeping pace with the numerous educational plugins built on top of LLMs. Unlike teachers, students are quick to install and use “educational plugins” aimed at increasing their grades. Moodle or Kahoot quizzes can be automatically generated for a given topic or course content with the help of LLMs, thus saving instructors’ time. Tools that automatically solve such quizzes are also available, and students can easily install them as browser plugins built on top of LLMs. That these tools are imperfect is no impediment to using them, since many students are satisfied with an average grade. Slides can be automatically generated from prompts and course content, helping teachers update their slides quickly. Students use this feature too: their slides have started to take on a GPT style, with content that is generic and lacks creativity. Some pedagogical instruments have already embedded LLMs: for example, the think-pair-share strategy has been augmented with ChatGPT in the loop, becoming think-pair-ChatGPT-pair-share.
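To make the quiz-generation workflow concrete, the following sketch shows how questions returned by an LLM could be rendered into Moodle's GIFT import format. The LLM call itself is omitted; the `questions` list stands in for a parsed model response, and the `to_gift` function and its assumed dictionary keys (`title`, `question`, `correct`, `wrong`) are hypothetical names chosen here for illustration. The GIFT syntax (`::Title:: Question {=right ~wrong}`) is Moodle's actual import format.

```python
# Sketch: rendering LLM-generated multiple-choice questions as Moodle GIFT text.
# The LLM request is assumed to have happened elsewhere; `questions` mimics
# the parsed response for a prompt such as "generate a quiz on topic X".

def to_gift(questions):
    """Render a list of question dicts as Moodle GIFT import text.

    Each dict is assumed (hypothetically) to have the keys
    'title', 'question', 'correct', and 'wrong' (a list of distractors).
    """
    items = []
    for q in questions:
        # One '=' answer marks the correct choice; '~' marks each distractor.
        answers = " ".join([f"={q['correct']}"] + [f"~{w}" for w in q["wrong"]])
        items.append(f"::{q['title']}:: {q['question']} {{{answers}}}")
    return "\n\n".join(items)

# Example input, shaped like a parsed LLM response for a course topic.
questions = [
    {
        "title": "LLM basics",
        "question": "What does LLM stand for?",
        "correct": "Large Language Model",
        "wrong": ["Linear Learning Machine", "Low-Level Module"],
    },
]

print(to_gift(questions))
```

The resulting text file can be imported directly through Moodle's question bank, which is what makes this pipeline attractive to instructors: the only manual step left is reviewing the generated questions for correctness.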

Second, incidents of AI in education already exist: https://incidentdatabase.ai/, for instance, lists 5 such incidents in the education domain. As a bottom-up approach to AI policies, an AI Ecological Education Policy Framework for university teaching and learning has recently been proposed, covering pedagogical, governance, and operational dimensions [1]. The AI Act, voted by the EU Parliament on 14 June 2023, places AI tools used in education in the high-risk category, because such tools (e.g. automatic graders, plagiarism detectors) affect the professional path of learners. Consequently, AI tools used in education will require a certificate of quality issued after an audit performed by a notified body. Expertise on algorithmic accountability and audits is now being accumulated, one example being the European Centre for Algorithmic Transparency in Seville. As a top-down regulatory approach, one proposal considers a global agency for AI, similar to the International Atomic Energy Agency. Regardless of the regulatory strategy (bottom-up or top-down), we need to practise regulating AI in order to keep up with the speed of AI developments. As Gary Marcus has stated: we should regulate AI before it regulates us.

At the top of the list of arguments for using ChatGPT in education is the claim that virtual assistants will contribute to the democratisation of education, that is, that education will become accessible to more learners. For the moment, however, a different phenomenon is taking place: students with a ChatGPT-4 subscription ($20) have a better chance of getting good grades than those without one. This newly introduced bias is a challenge that university policies need to address.
Keywords:
Artificial Intelligence in Education, AI university policy, ChatGPT.