AN AI EDUCATIONAL TOOL FOR DETECTING REDUNDANCY IN DISTRACTORS AND ITEMS WITHIN MULTIPLE-CHOICE TESTS
1 Regional Center of Education and Training Professions (MOROCCO)
2 Regional Academy of Education & Training, Ministry of National Education (MOROCCO)
3 Abdelmalek Essaadi University (MOROCCO)
About this paper:
Appears in: INTED2024 Proceedings
Publication year: 2024
Pages: 6454-6458
ISBN: 978-84-09-59215-9
ISSN: 2340-1079
DOI: 10.21125/inted.2024.1691
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
In recent years, educators have gradually shifted their assessment practices toward Multiple-Choice Questions (MCQs), particularly in competitive examinations comprising many items.

The competition organizers recommend that educators who design assessment situations undergo a "cobaying" exercise, a term for the practice of piloting or beta-testing an exam: educators or professionals take the same exam as the candidates in order to assess the relevance, difficulty, and feasibility of the questions. The goal is to verify that the questions are clear, appropriately challenging, and answerable within the allotted time. This practice allows exam designers to adjust and refine the questions before administering the exam to students, improving the overall quality of the assessment. It helps ensure that the exam is fair and that it effectively measures the students' skills.

One crucial step in this process is verifying the quality of the exam items, approving or rejecting each question after analysis. Although this step requires a significant time investment from the designers, some defects may still escape their vigilance, such as repetitions or redundancies that persist even after careful review and adjustment of the questions. Continued vigilance and periodic reviews are therefore essential to catch issues missed during the initial review.

In this work, we propose an innovative approach to strengthening exam quality by exploiting deep learning techniques to identify and resolve redundancy problems. After an overview of the fundamental concepts and principles of exam design and a review of deep learning algorithms used for text processing, we present an intelligent tool for detecting question and distractor redundancy in multiple-choice tests. The tool was evaluated during the quality-verification phase of the Moroccan recruitment competition tests for future teachers in the 2022-2023 academic year and yielded promising results in terms of time savings, question diversity, efficiency, and a reduction in candidates' complaints.
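To illustrate one way such redundancy detection can work, the sketch below flags near-duplicate question stems or distractors using pretrained sentence embeddings and cosine similarity. This is a minimal illustration under stated assumptions, not the authors' implementation: the abstract does not specify the model or architecture, and the model name (all-MiniLM-L6-v2), the 0.85 similarity threshold, and the sample questions are all illustrative choices.

# Minimal sketch: flag redundant MCQ stems or distractors via embedding
# similarity. Model and threshold are assumptions, not the paper's settings.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

def find_redundant(texts, threshold=0.85):
    """Return pairs of texts whose cosine similarity exceeds the threshold."""
    embeddings = model.encode(texts, convert_to_tensor=True)
    flagged = []
    for i, j in combinations(range(len(texts)), 2):
        score = util.cos_sim(embeddings[i], embeddings[j]).item()
        if score >= threshold:
            flagged.append((texts[i], texts[j], round(score, 3)))
    return flagged

# Example: compare question stems across the test; the same function can be
# applied to the distractors within a single item.
stems = [
    "Which organ pumps blood through the body?",
    "What organ is responsible for pumping blood?",
    "Which gas do plants absorb during photosynthesis?",
]
for a, b, score in find_redundant(stems):
    print(f"Possible redundancy ({score}): '{a}' <-> '{b}'")

In practice the threshold would need tuning against item pairs judged redundant by exam designers, and a cheap exact-match check on normalized strings can run first as a pre-filter before the embedding comparison.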
Keywords:
Assessments, MCQs, items, distractors, cobaying, deep learning, natural language processing.