CONSTRUCTIVE BY DESIGN? EVALUATING AN AI FEEDBACK COACH IN COLLABORATIVE LEARNING CONTEXTS
Vrije Universiteit Amsterdam (NETHERLANDS)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 1458
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.1458
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
Processing feedback is vital for the development of academic and professional skills. One such skill is the ability to provide effective peer feedback, which supports the development of critical thinking and enhances employability. Like other skills, providing high-quality feedback requires practice, as well as feedback on one's own feedback. However, it is often unfeasible for university instructors to review and respond promptly to student feedback comments when managing large cohorts. In this context, generative artificial intelligence (GenAI), and particularly large language models (LLMs), holds considerable promise for enabling immediate feedback on student work. Accordingly, LLMs are increasingly integrated into educational applications such as tutor bots and feedback systems. Nevertheless, evidence concerning the impact of such technologies on student behavior and on students' learning processes remains relatively limited. Understanding student-technology interactions is crucial for integrating GenAI in ways that support, rather than compromise, pedagogical objectives.

This research examines the implementation of a GenAI-based Automated Feedback Coach (AFC) in peer feedback assignments on collaboration and presentation skills within an undergraduate course at Vrije Universiteit Amsterdam in the Netherlands. The research is guided by the following question: How does the AFC influence students' engagement in the peer feedback process and the feedback they provide?

To address this question, a mixed-methods approach was employed, collecting both qualitative and quantitative data. An anonymous post-course survey (N=72) was administered to capture aspects such as usage frequency, perceived quality, trust, feedback behavior, and perceptions of the peer feedback process. The survey comprised open-ended questions and Likert-scale items derived from validated instruments, including the Feedback Literacy Behavior Scale. Furthermore, data from semi-structured student focus groups (N=72), supplemented by teacher observations, were used to enrich the survey data. Lastly, feedback comments from the current year were analyzed for characteristics such as length, valence, and complexity, and compared with those from the previous year.

Preliminary results indicate that many students used the tool for their feedback tasks. Most students reported using AFC suggestions to make their feedback more specific, more critical, and richer in feedforward. Sentiment analysis shows that respondents were generally positive about the overall AFC experience, and quantitative measures indicate a strong consensus that the AFC's feedback was perceived as justified, useful, and adequate. However, the qualitative data also suggest that the tool encourages cognitive offloading, serving more as a checklist than as a prompt for critical thinking. Some students also expressed frustration with the user interface, which negatively affected their experience.

Additionally, we will provide an analysis of the AFC's impact on student feedback characteristics, such as length and complexity, as well as teachers' perceptions of and experiences with the tool. Based on our findings, we present recommendations for educators and tool developers concerning the constructive integration of GenAI into educational feedback practices.
Keywords:
Peer feedback, technology, generative AI, large language models, automated feedback, artificial intelligence.