RETHINKING FORMATIVE ASSESSMENT: HOW AI, TEACHER, AND PEER FEEDBACK SHAPE STUDENTS’ ACADEMIC WRITING DEVELOPMENT
King Saud University (SAUDI ARABIA)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March 2026
Location: Valencia, Spain
Abstract:
Developing academic writing remains a central challenge in higher education, particularly for students who enter university with varied levels of preparedness and confidence. Although formative assessment is widely recognized as effective, questions persist about how different feedback modalities—especially AI-generated feedback—support students’ writing development, self-regulation, and understanding of academic expectations. The present study addresses this gap through an ongoing comparative investigation of three feedback types: AI-generated feedback via ChatGPT, instructor-generated rubric-based feedback, and structured peer feedback.
The study is currently being conducted at King Saud University with 60 female undergraduate students, divided into three groups of 20, all enrolled in an Educational Technology course taught by the researcher. Over a full semester, students produce three argumentative essays, all evaluated using the same analytic rubric designed and validated by the researcher. The rubric assesses coherence, argumentation, use of evidence, structure, and language accuracy, ensuring comparability across feedback modalities.
The research design follows a quasi-experimental repeated-measures approach. The AI feedback group receives standardized rubric-aligned feedback generated by ChatGPT; the teacher feedback group receives detailed instructor comments; and the peer feedback group uses a simplified rubric with structured prompts. Quantitative data include rubric scores for each writing task and pre–post measures of writing proficiency, analyzed using repeated-measures ANOVA and effect sizes. Qualitative data include samples of feedback comments and students’ reflections, coded by specificity, clarity, cognitive demand, and scaffolding function.
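The repeated-measures analysis described above can be sketched in code. The snippet below is an illustrative example only: the data are synthetic and the column names (`student`, `task`, `score`) are assumptions, not taken from the study. It uses statsmodels' `AnovaRM` to test for change in rubric scores across the three writing tasks within one feedback group, and derives partial eta squared as an effect-size estimate.

```python
# Hypothetical sketch of a repeated-measures ANOVA on rubric scores.
# All data and column names are illustrative, not from the actual study.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
rows = []
for student in range(20):      # one feedback group of 20 students
    for task in (1, 2, 3):     # three argumentative essays across the semester
        # Rubric score (out of 100) drifting upward across tasks, plus noise
        score = 60 + 5 * task + rng.normal(0, 5)
        rows.append({"student": student, "task": task, "score": score})
df = pd.DataFrame(rows)

# Within-subject factor: writing task; dependent variable: rubric score
result = AnovaRM(df, depvar="score", subject="student", within=["task"]).fit()
print(result)

# Partial eta squared as an effect-size estimate for the task factor
table = result.anova_table
f_val = table.loc["task", "F Value"]
df_num = table.loc["task", "Num DF"]
df_den = table.loc["task", "Den DF"]
eta_sq = (f_val * df_num) / (f_val * df_num + df_den)
print(f"partial eta^2 = {eta_sq:.3f}")
```

Comparing the three feedback groups would additionally require a between-subjects factor (group), i.e., a mixed-design ANOVA; the sketch above covers only the within-subject component.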
The study is theoretically grounded in Vygotsky’s (1978) Zone of Proximal Development, Wood, Bruner, and Ross’s (1976) scaffolding model, and the principles of formative assessment (Black & Wiliam, 1998) and feedback-for-learning (Hattie & Timperley, 2007). It examines whether AI can serve as a form of expert-like scaffolding and how learners interpret feedback differently depending on modality, cognitive load, and clarity.
Findings are expected to advance theories of scaffolding, feedback interpretation, and self-regulation, while offering practical guidance for teachers on integrating AI tools effectively. The study also provides institutions with a scalable model for using AI to enhance feedback processes, reduce workload, and support writing improvement across large cohorts.
Overall Significance:
At a time when educational systems worldwide struggle with how to integrate generative AI into teaching and assessment, this study offers timely, systematic evidence on the pedagogical value and limitations of AI-mediated feedback. By comparing AI, teacher, and peer feedback under controlled conditions with a unified rubric and repeated writing cycles, the study provides a nuanced understanding of how different feedback types shape writing development, self-regulation, and students’ internalization of academic norms.
As an ongoing project, the study continues to generate insights that will contribute to both theory and practice, laying the groundwork for a more adaptive, learner-centered, and technologically enriched writing pedagogy in higher education.
Keywords:
AI feedback, teacher feedback, peer feedback, academic writing, scaffolding, formative assessment, ChatGPT, higher education, writing development, ZPD.