STRUCTURED AI-SUPPORTED ASSESSMENT (SAISA): A NOVEL DESIGN INTEGRATING CHATGPT TO SUPPORT CLINICAL REASONING AND REFLECTIVE ENGAGEMENT IN VETERINARY STUDENTS
City University of Hong Kong (CHINA)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 0866 (abstract only)
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.0866
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
Background:
Artificial intelligence (AI) is increasingly incorporated into health professions education, yet its integration into summative assessment remains limited. Traditional assessments often emphasize factual recall and do not capture higher-order cognitive skills such as clinical reasoning, reflection, and ethical appraisal. To address this gap, we developed a Structured AI-Supported Assessment (SAISA), embedding ChatGPT into case-based exercises to promote critical engagement with AI while preserving academic integrity and assessment validity.

Methods:
A quasi-experimental study was conducted with three groups of fifth-year veterinary students at City University of Hong Kong. The experimental group (n = 24) completed an equine medicine course using SAISA; a historical control group (n = 24) had completed the same course without AI integration; and a concurrent control group (n = 24) took a conventionally delivered small-animal medicine course. SAISA comprised AI-supported role-playing consultations, case-solving exercises, and student-designed case critiques. Final case-based examination scores served as the primary outcome. A post-course survey, built from adapted validated scales, evaluated higher-order thinking, motivation, engagement, and self-efficacy. Statistical analyses used linear mixed-effects models and chi-square tests, with significance set at p < 0.05.
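To make the survey analysis concrete, the following is a minimal sketch of the kind of chi-square test of independence the abstract describes, applied to a hypothetical 2×2 contingency table; the counts, group labels, and survey item below are invented for illustration and are not the study's data.

```python
# Hedged sketch (hypothetical data): the abstract reports chi-square tests
# on survey responses with significance set at p < 0.05. The counts below
# are invented for illustration only and do not come from the study.
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table for one survey item ("carefully reviewed AI
# outputs": yes / no), comparing the SAISA group with a control group.
observed = [[22, 2],   # SAISA group:   22 yes,  2 no
            [14, 10]]  # control group: 14 yes, 10 no

# chi2_contingency applies Yates' continuity correction for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print("significant at p < 0.05" if p < 0.05 else "not significant at p < 0.05")
```

The same p < 0.05 threshold stated in the Methods would then decide whether the response distributions differ between groups for that item.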

Results:
Students exposed to SAISA achieved significantly higher final examination scores than both the historical (mean difference = 14.3 percentage points; p < 0.001) and concurrent control groups (mean difference = 5.5 percentage points; p = 0.02). Survey findings showed strong reflective and critical engagement: 91% of students reported carefully reviewing AI outputs, 87% reflected on errors, and 87% considered multiple diagnostic approaches. Most participants (74%) believed AI training is important for their professional development, and 68% reported greater confidence in clinical reasoning when using AI support.

Conclusion:
Structured integration of ChatGPT into summative assessment improved veterinary students’ case-based performance, reflection, and confidence while fostering responsible AI use. SAISA demonstrates a scalable approach for developing higher-order reasoning and AI literacy in healthcare education.
Keywords:
Veterinary education, Clinical reasoning, ChatGPT, Artificial intelligence, Critical thinking, Assessment.