RESPONSIBLE GENERATIVE AI USE IN STUDENT ASSESSMENTS: THE CARROT VS THE STICK
The Open University (UNITED KINGDOM)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 2142 (abstract only)
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.2142
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
As generative AI (GenAI) tools become ubiquitous in higher education, institutions face critical decisions about how to regulate their use in student assessments. The Open University (OU) has adopted a three-tiered policy framework: Category 1, which prohibits GenAI use entirely; Category 2, which permits responsible use at student discretion; and Category 3, which mandates GenAI integration as part of the learning outcomes.

This paper examines the practical implementation and student experiences of the first two approaches across two distinct computing modules.
We report on a comparative study in which the restrictive policy (Category 1: no GenAI use) was applied to one module (TT284 Web Technologies), while the permissive policy (Category 2: responsible use encouraged) was implemented in another (TM252 Web Technologies). Comparisons of student behaviour between these modules are particularly valuable because TM252 is the direct replacement for TT284, so the student cohorts have comparable academic profiles. We can also exclude developments in GenAI capabilities over time as a confounding factor, since the final presentation of TT284 and the first presentation of TM252 ran contemporaneously.

Student experiences were evaluated across two key assessment components, practical programming tasks and technical report writing, within the three Tutor Marked Assignments (TMAs) and the End of Module Assessment (EMA).

Our findings reveal mixed outcomes for both approaches. Students subject to the prohibition (Category 1) in TT284 expressed frustration and reported difficulties in keeping pace with industry-standard practices; academic conduct referrals were higher due to suspected GenAI use, and tutors were frustrated that there was little they could do to prevent it. Students allowed discretionary use (Category 2) in TM252 demonstrated varying levels of judgment about the appropriate application of GenAI; academic conduct referrals were significantly lower, and tutors felt empowered to educate students in appropriate academic usage by having prompts included as referenced and cited sources. Interestingly, the impact differed between coding and writing tasks, with students showing different competency development patterns depending on the policy applied.

This research contributes to the ongoing debate about balancing academic integrity with preparing students for AI-augmented professional environments. We discuss the implications for assessment design, examine the challenges of enforcement versus education, and propose recommendations for developing more nuanced, context-sensitive GenAI policies in computing education.
Keywords:
Generative AI, assessment policy, academic integrity, computing education, responsible use.