MITIGATING MISINFORMATION IN AI-POWERED EDUCATIONAL SYSTEMS: THE FORT VERIFICATION APPROACH
University of Malta (MALTA)
About this paper:
Appears in: INTED2025 Proceedings
Publication year: 2025
Pages: 6414-6418
ISBN: 978-84-09-70107-0
ISSN: 2340-1079
doi: 10.21125/inted.2025.1658
Conference name: 19th International Technology, Education and Development Conference
Dates: 3-5 March, 2025
Location: Valencia, Spain
Abstract:
The increasing integration of Artificial Intelligence, particularly Large Language Models (LLMs), into educational systems offers the potential to revolutionise personalised learning experiences. However, the susceptibility of LLMs to generating misinformation and hallucinations necessitates robust verification techniques to ensure student safety and trust. This research paper presents FORT Verification, a novel multi-stage verification framework specifically designed to mitigate the known risks of AI-powered educational systems. FORT Verification targets the known weaknesses of LLMs in educational settings, such as hallucinations, irrelevant information, and potential misuse, by leveraging AI techniques to enhance the accuracy, relevance, and appropriateness of AI-generated content. The approach combines the strengths of Generative Adversarial Networks, Retrieval Augmented Generation, and online cross-referencing, ensuring the trustworthiness of educational content while safeguarding against potential pitfalls. The efficacy of FORT Verification was evaluated through a combination of benchmark testing and real-world testing with user feedback. Utilising a modified GSM8K benchmark, the FORT framework correctly identified hallucinations and incorrect information with an accuracy of 90%. The results highlight the superior performance of FORT Verification compared to other approaches, particularly in addressing mathematical reasoning tasks and ensuring the delivery of factually accurate information to students.
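The abstract does not publish FORT Verification's implementation, but the retrieval-augmented checking stage it describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, the 0.6 support threshold, and the token-overlap heuristic (a real system would use the paper's RAG and cross-referencing machinery, not simple word overlap).

```python
# Hypothetical sketch of a retrieval-backed verification stage, in the spirit
# of the multi-stage approach described in the abstract. All names, the
# threshold, and the overlap scoring are illustrative assumptions, not the
# paper's actual method.

def token_overlap(answer: str, reference: str) -> float:
    """Fraction of the answer's tokens that also appear in a reference text."""
    a = set(answer.lower().split())
    r = set(reference.lower().split())
    return len(a & r) / len(a) if a else 0.0

def verify(answer: str, retrieved_passages: list[str], threshold: float = 0.6) -> dict:
    """Check an AI-generated answer against trusted retrieved passages and
    flag it as a suspected hallucination if no passage supports enough of
    its content."""
    best = max((token_overlap(answer, p) for p in retrieved_passages), default=0.0)
    return {"supported": best >= threshold, "support_score": round(best, 2)}

# Toy usage: one well-supported answer, one unsupported claim.
passages = ["The speed of light in vacuum is about 299,792 km per second."]
print(verify("The speed of light in vacuum is about 299,792 km per second.", passages))
print(verify("Light travels instantaneously everywhere.", passages))
```

In a production pipeline the overlap heuristic would be replaced by the retrieval, adversarial, and cross-referencing stages the paper evaluates; the sketch only shows the overall accept/flag control flow.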
Keywords:
AI in Education, Large Language Models, Hallucinations, Mitigation Techniques.