EXPLORING HOW COURSE-TAILORED AI LEARNING ASSISTANTS SHAPE STUDENT ENGAGEMENT AND PERFORMANCE IN INTRODUCTORY COMPUTING
1 Birkbeck College, University of London (UNITED KINGDOM)
2 Codio (UNITED STATES)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 1723
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.1723
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
Recent advances in large language models (LLMs) have expanded opportunities for providing timely, personalized instructional support and formative feedback in computing education. Previous work demonstrated promising results from deploying a specialized AI learning assistant in asynchronous MOOC-style (Massive Open Online Course) computer science courses, where usage patterns and performance outcomes suggested meaningful gains in engagement, completion rates, and grade performance. This study extends that body of work by exploring the effects of customized, course-tailored AI learning assistants designed specifically for an introductory programming course at Birkbeck College.

The course enrolls students from diverse academic and experiential backgrounds. The customized assistants - integrated into the Codio platform - leverage LLMs to provide context-sensitive, 24/7 support aligned with course content and objectives. Different assistants were customized for clarifying programming errors, interpreting assignment instructions, providing feedback on exercises and projects, and guiding students through problem-solving with scaffolded hints.

In this study, we analyze student interaction patterns with the assistants alongside established process metrics, including time-on-task, frequency and type of errors encountered, error-resolution duration, and run/submit attempt behaviors. We additionally examine the relationship between assistant use and course performance measures such as assignment grades and overall course progress. We also compare these metrics against previous cohorts of the same course that did not have the assistants enabled.

This investigation extends earlier evidence by evaluating whether course-specific, pedagogically aligned AI learning assistants can enhance engagement, support productive help-seeking behaviors, reduce frustration, and improve learning outcomes in a hybrid introductory computing course. Insights from this work contribute to a growing understanding of how customized AI tools can be integrated responsibly and effectively into STEM learning environments. We conclude by discussing implications for instructor practice, design considerations for course-tailored assistants, and recommendations for institutions seeking to adopt AI-supported instruction without compromising learning integrity.
Keywords:
Computing, Education, Assessment, AI, LLMs, Support, Feedback, Engagement, Performance.