CAN ARTIFICIAL INTELLIGENCE ASSIST FACULTY? DESIGN PRINCIPLES FOR AI-SUPPORTED QUESTION GENERATION IN HIGHER EDUCATION
1 Vienna University of Economics and Business (AUSTRIA)
2 Knowledge Markets (AUSTRIA)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 1311
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.1311
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
University faculty are increasingly burdened by the expanding demands of teaching, research, and administrative responsibilities, leaving limited time for the labor-intensive task of developing high-quality assessments. Designing valid and reliable multiple-choice questions (MCQs) is particularly time-consuming, with estimates indicating that a single high-quality item can require 45 to 90 minutes of dedicated effort. In large-enrollment courses, where multiple MCQ exams are administered each semester, this workload quickly becomes unmanageable.

Considering these challenges, recent advances in Artificial Intelligence (AI) offer a promising opportunity to substantially reduce this burden by automatically generating MCQs. However, business education at the university level requires the assessment of higher-order cognitive processes, including strategic reasoning, case analysis, and decision-making under uncertainty. Although prior studies have identified notable limitations of AI-generated items, including implausible or uneven distractors, subtle linguistic cues in stems, and poor alignment with intended learning outcomes, these findings reflect the capabilities of earlier language models. It therefore remains an open question whether MCQs generated by contemporary systems can meet the cognitive demands of business education.

To address these gaps, this study investigates whether AI-generated MCQs can match the quality of human-written items in undergraduate and graduate business education courses. In addition, it examines how such AI tools could be responsibly integrated into university assessment practices. Employing a design science approach, the research involves the creation of an artifact, an MCQ-generation service called Quizmentor, accompanied by qualitative interviews from which underlying design principles are derived. This combined approach not only yields a practical tool but also deepens the understanding of the challenges educators face in assessment development.

At the time of writing, Quizmentor is available as a working prototype, providing a realistic context for the interviews. In total, four pre-prototype interviews were conducted across two universities, followed by three post-prototype interviews across three universities.

The qualitative interview findings yield several design principles for AI-supported question tools. First, the underlying large language models must comply with the General Data Protection Regulation (GDPR) and be free from potential copyright infringements. Second, the user interface must emphasize high usability and actively support human-AI co-development, or "mentoring", of questions. It is crucial that such tools avoid giving users the impression that AI-generated MCQs are flawless or require no human refinement. Third, the output formats should be interoperable, allowing their use in both paper-based and digital assessments. Finally, AI-supported question tools should accommodate multiple input formats, ranging from presentations to textbooks. They should also integrate with the "digital home" of faculty, such as learning management system (LMS) platforms or other content repositories, to enable seamless access to source materials.
Keywords:
AI tools, assessment, business education.