SO WE’RE EMBRACING LLMS? NOW WHAT?: A STUDY ON ENHANCING FEEDBACK AND ASSESSMENT IN HIGHER EDUCATION THROUGH GENERATIVE AI
University of Sunderland (UNITED KINGDOM)
About this paper:
Appears in: EDULEARN24 Proceedings
Publication year: 2024
Pages: 2486-2494
ISBN: 978-84-09-62938-1
ISSN: 2340-1117
doi: 10.21125/edulearn.2024.0683
Conference name: 16th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2024
Location: Palma, Spain
Abstract:
The rapidly evolving landscape of Artificial Intelligence (AI) in educational settings has opened new avenues for enhancing teaching and learning practices. Within this wave of technological advancement, the field of Generative AI, especially through the development of Large Language Models (LLMs) such as OpenAI's ChatGPT, Google's Bard and Gemini, and Meta's Llama 2, has demonstrated remarkable proficiency in parsing and interpreting complex human language. Institutional attitudes are shifting towards embracing this technology. This capability holds the promise of revolutionising the way feedback is delivered in academic environments.

In the landscape of Higher Education (HE), the shift towards authentic assessment marked a pivotal change, prioritising real-world relevance in learner evaluation. The essence of this approach lies not just in assessing students' abilities but in preparing them for practical applications of their learning, fostering critical thinking and problem-solving skills. Within this context, feedback forms a cornerstone of the learning process, transcending its traditional function of marking answers right or wrong. By providing learners with meaningful and actionable feedback, educators can highlight areas for improvement that learners can apply to future challenges. The value of feedback lies not just in its capacity to assess but in its ability to empower learners, facilitating deeper engagement.

Assessment plays a crucial role in HE, guiding both student learning and instructional methods. However, the traditional marking process is often weighed down by challenges of scalability and timeliness, particularly in settings with large student cohorts, and the quality and depth of feedback, vital to learner improvement, can suffer as a result.

In response to these challenges, this work explores how LLMs can be leveraged to offer meaningful and contextually relevant feedback based on initial instructor-provided feedback and scoring, thereby enriching the learner experience whilst grounding the assessment itself in human academic judgement; the aim is to augment existing workflows rather than replace them. Some current approaches to Generative AI seek to remove human academic judgement and have work marked and assessed entirely by AI. In this study, by contrast, we examine the academic viability of integrating Generative AI tools into marking workflows to augment instructor-produced critique rather than to assess directly, and we gather student perceptions on the use of such technologies for providing summative feedback. Rather than relying on widely known proprietary online tools, our work focuses on the development and application of LLMs in an offline setting to meet educational needs and objectives, reducing potential issues of data governance. Through this exploration of LLM applications for improving summative assessment feedback, the study contributes to the broader discourse on integrating AI within education.
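To make the workflow concrete, the sketch below shows one way such an offline, instructor-grounded pipeline could look: a locally hosted chat model elaborates the instructor's terse notes and mark into student-facing feedback without the judgement or score ever being delegated to the model. This is a minimal illustration only; the model choice (a Llama 2 chat variant via the Hugging Face transformers library), the prompt wording, and the `augment_feedback` helper are assumptions for exposition, not the authors' actual implementation.

```python
# Illustrative sketch: expanding brief instructor feedback into richer
# student-facing feedback with a locally hosted LLM, so no assessment
# data leaves the institution. Model name and prompt are assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # any locally cached chat model
    device_map="auto",
)

def augment_feedback(assignment_brief: str, instructor_notes: str, score: int) -> str:
    """Turn terse instructor notes and a mark into actionable feedback.

    The mark and the critique remain the instructor's; the LLM only
    elaborates and rephrases them for the student.
    """
    prompt = (
        "You are an assistant helping a university tutor write feedback.\n"
        f"Assignment brief: {assignment_brief}\n"
        f"Instructor's notes: {instructor_notes}\n"
        f"Score awarded: {score}/100\n"
        "Rewrite the notes as constructive, actionable feedback for the "
        "student. Do not change the judgement or the score.\n\nFeedback:"
    )
    out = generator(prompt, max_new_tokens=300, do_sample=True, temperature=0.7)
    # The pipeline returns the prompt plus the completion; keep only the latter.
    return out[0]["generated_text"][len(prompt):].strip()
```

The key design point this sketch encodes is the one the abstract argues for: the model receives the instructor's judgement as fixed input and is constrained to elaboration, keeping the human academic judgement in the loop.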
Keywords:
Generative AI, Large Language Models, Summative Feedback, Artificial Intelligence, AI.