WHAT’S IN A SPELLCHECKER: ARE WE SIMPLY WITNESSING THE NEXT EPOCH IN TOOLS FOR SUPPORTING LEARNING?
University of Sunderland (UNITED KINGDOM)
About this paper:
Appears in: EDULEARN24 Proceedings
Publication year: 2024
Pages: 5076-5084
ISBN: 978-84-09-62938-1
ISSN: 2340-1117
doi: 10.21125/edulearn.2024.1244
Conference name: 16th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2024
Location: Palma, Spain
Abstract:
The very nature of education encompasses the systematic process of acquiring knowledge, skills, and values. Education, and the processes that support it, develops, changes, and evolves over time. Indeed, the process of learning has changed significantly in the past 20-30 years, with assessment methods previously focused predominantly on exams and rote learning. We have recently seen a move away from this towards more ‘authentic assessment’ methods. Aligning methodologies, briefs, and modes of assessment to a more ‘authentic’ style has been a significant change.

There have been many changes in approaches to education and its assessment, more specifically to what a student might be expected to be assessed on. For example, spelling and grammar would once form part of the grading criteria, and elements such as structure and style (even if not part of the subject matter) would be assessed. With computers and software such as Microsoft Office now much more prevalent, it is accepted, indeed encouraged, that spelling and grammar are checked, amended, and even that new suggestions are provided automatically by these tools.

As technology has grown in prevalence and pervasiveness, new tools have become available, accepted for use in assessment, and even actively encouraged by academics and institutions. Indeed, it would now be seen as somewhat reductive to prevent the use of spelling and grammar checkers. Services such as Turnitin and Studiosity are now actively encouraged by universities and used to review a learner’s work, providing feedback on structure, spelling, grammar, layout, and flow. Enter the era of Generative AI and Large Language Models (LLMs) such as ChatGPT. These have sent the educational world into a metaphorical spin, as these new technologies not only provide feedback on structure and spelling but can, from a simple request, write entire bodies of work, answer questions, provide information, and produce operational, functioning code.

With the meteoric rise of Generative AI and ChatGPT, the education sector, at least in terms of policy and regulation, seems to have been caught unawares. With no real policies in place, nor any realisation of the impact ChatGPT would have, the sector initially took the decision to ban the use of LLMs for most, if not all, assessments. This has created a dichotomy between assessing a student’s own competencies and exposing them to tools and technologies that are used, and will continue to be used, in industry. There is certainly a need for the education sector to take a step back and reassess the landscape, considering the impact of these technologies and the policy changes needed to support them.

In this paper we survey a wide range of stakeholders, including current and former (graduated) students, university academic and professional services staff from a range of subjects and institutions, and industry employers, to obtain a rounded and broad view of the acceptance, perception, and ethical understanding of LLMs such as ChatGPT for use in assessment. Our questions focus on the nature of ‘authentic assessment’, the change in industry technologies (such as the adoption of Generative AI), and how we ensure we are assessing student learning and competencies. Early results show a stark split in the acceptance of LLMs for learning versus assessment and, most interestingly, a split by academic subject domain.

We ask: What does this mean for the future of education and learning?
Keywords:
Education, Artificial Intelligence, ChatGPT, Large Language Models, Learning Outcomes.