EXPERT CONSENSUS ON THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE IN PRE-SERVICE TEACHER EDUCATION: PEDAGOGICAL POTENTIAL, RISKS AND IMPLICATIONS FOR ACADEMIC INTEGRITY
Balearic Islands University (SPAIN)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
This study explores expert perspectives on the pedagogical potential, risks and implications for academic integrity of generative artificial intelligence (AI) in initial teacher education. Using a three-round Delphi design, we consulted a purposive panel of 24 experts from universities in ten European countries (Spain, Portugal, Sweden, Finland, France, Greece, Slovenia, Iceland, Denmark and Cyprus), including specialists in teacher education, educational technology and academic integrity. Panel members evaluated 68 statements derived from a prior scoping review and focus groups with pre-service teachers, using a 5-point Likert scale and open comments. Consensus was defined as ≥75% agreement and an interquartile range ≤1; qualitative data were thematically coded to refine and contextualise quantitative findings.
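The consensus rule described above (at least 75% of panellists agreeing and an interquartile range of no more than 1 on the 5-point scale) can be expressed as a small computational check. The sketch below is illustrative only; the function name, the treatment of ratings 4–5 as "agreement", and the quartile method are assumptions, not details reported in the study.

```python
import statistics

def reached_consensus(ratings, agree_threshold=0.75, max_iqr=1.0):
    """Illustrative Delphi consensus check:
    - agreement: share of panellists rating 4 or 5 on the 5-point scale
    - stability: interquartile range (Q3 - Q1) of all ratings
    Consensus requires agreement >= 75% AND IQR <= 1."""
    agreement = sum(r >= 4 for r in ratings) / len(ratings)
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartile cut points
    return agreement >= agree_threshold and (q3 - q1) <= max_iqr

# A statement most panellists rate 4-5 with tight spread passes;
# a polarised statement does not.
print(reached_consensus([4, 5, 4, 5, 4, 4, 5, 4]))
print(reached_consensus([1, 5, 2, 5, 3, 4, 5, 2]))
```

In practice such a check would be run per statement after each Delphi round, with statements failing the rule fed back to the panel alongside the anonymised qualitative comments.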
Experts converged on a view of AI as a potentially transformative, yet ambivalent, presence in pre-service teacher education. Strong consensus emerged around the value of AI tools to support lesson planning, differentiated instruction, feedback on written work and modelling of formative assessment practices. The panel agreed that early, structured engagement with AI can foster critical digital literacy, raise awareness of algorithmic bias and prepare future teachers to guide pupils’ responsible use. At the same time, experts identified significant risks. High-consensus statements highlighted increased opportunities for plagiarism, contract cheating and ghost-writing, as well as the erosion of students’ skills in academic writing, argumentation and problem solving. Participants also stressed equity concerns related to unequal access to paid AI tools and to language bias affecting speakers of minority languages.
Regarding academic integrity, the panel endorsed a shift from purely punitive approaches towards educative and preventive strategies. Consensus was reached that programme-level policies must explicitly address AI-assisted work, that assessment tasks should be redesigned to emphasise process, reflection and oral defence, and that transparent disclosure of AI use should be normalised as a component of scholarly practice. Experts recommended that teacher education institutions embed AI-related integrity training across the curriculum rather than confining it to isolated workshops.
The Delphi process resulted in a set of prioritised recommendations:
(a) developing institution-wide guidelines that distinguish acceptable, questionable and unacceptable uses of AI;
(b) integrating critical AI literacy and integrity education into core pedagogical modules;
(c) diversifying assessment formats to reduce reliance on easily automatable tasks; and
(d) investing in staff development so that teacher educators can model ethical and pedagogically sound AI use.
Overall, the study concludes that the challenge is not whether AI should be used in pre-service teacher education, but how it can be harnessed in ways that enhance learning while safeguarding academic integrity and promoting future teachers’ professional responsibility.
Acknowledgement:
This paper has been elaborated within the framework of the project PID2022-141031NB-I00, funded by MICIU/AEI/10.13039/501100011033 and by ERDF “A way of making Europe”, and the project AI-UPskilled 2025-1-ES01-KA220-HED-000355590, co-funded by the European Union.
Keywords:
Artificial intelligence, pre-service teachers, initial teacher education, academic integrity, Delphi study.