SUPPORTING STUDENT AGENCY IN GENERATIVE AI INTEGRATION: IMPLEMENTATION REFLECTIONS FROM AN EVOLVING HIGHER EDUCATION LANDSCAPE
University College Dublin (IRELAND)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 0458 (abstract only)
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.0458
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
The advent of commercially available GenAI tools such as ChatGPT from 2022 onwards has accelerated existing conversations about assessment validity in higher education. Staff designing assessments are unsure how to balance the integrity of the assessment with the ethical challenges GenAI presents for them and their students. Students are particularly concerned about equity of access to the tools and ethical discomfort with their use. One way forward through this ‘wicked problem’ is to allow students some level of choice in how they use GenAI. In response to the challenges posed by commercial GenAI, the researchers of this paper, who are educational developers, introduced a choice for students in engaging with the tools in an assessment delivered in 2024. The first run proved successful (Wolf & O'Neill, 2025); however, both the higher education and commercial GenAI contexts are changing swiftly, and the initial assessment design was quickly becoming dated.

This paper focuses on how the researchers built on their previous study (Year 1 of implementation). Using an action research approach over the following two academic years, further reflections and changes were made to these choices in line with the changing discourse on GenAI use in higher education literature and practice. Following reflections after Year 1, key changes included moving away from the language of ‘thought partner’ and ‘author’, as this could be seen as anthropomorphising GenAI by attributing human-like traits to it. The literature on GenAI use was also beginning to inform the sector about the diverse ways students were starting to use it (beyond accusations of cheating), including options such as structuring, editing, evaluating, and task completion. Building on the work of Perkins et al. (2024), these emerging options were presented to students in the module as alternative choices in Year 2. As in previous iterations, the researchers developed a series of supports for the students around the choices, including rubrics and information on the equitability of these choices. Following this implementation, reflections based on student feedback indicated that although responses were generally positive, there was some overlap between the choices and, at times, overly complex instructions. The literature was also developing a more nuanced understanding of how students could potentially use GenAI, identifying more creative uses such as ‘Full AI’ and ‘AI exploration’. Therefore, the final iteration (Year 3) allowed students to choose from any of Perkins et al.’s (2025) five revised levels. A key challenge in this design, however, was how to ensure equity of effort between students, where in the same assignment some students might choose to complete the work entirely with GenAI while others might not use GenAI and write it all themselves.
Developing different weightings for the different subcomponents was one approach to addressing this. Further reflections on how successfully this was achieved will be available in December 2025.

This three-year study demonstrates how student choice in GenAI use can be iteratively refined through action research. Significant questions remain, particularly regarding equity of effort, yet the documented process offers educators a responsive framework for navigating tensions between student agency, academic integrity, and ethical practice in assessment design.
Keywords:
Assessment choice, GenAI, student agency, reflection on practice.