USING MACHINE LEARNING FOR AUTOMATED GRADING OF STUDENT SCIENCE WRITING
University of Arizona (UNITED STATES)
About this paper:
Conference name: 16th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2024
Location: Palma, Spain
Abstract:
A challenge in teaching large classes for formal or informal learners is assessing writing. As a result, most large classes, especially in science, use objective assessment tools like multiple-choice quizzes. The rapid maturation of AI has created the possibility of using large language models (LLMs) to assess student writing. An experiment was carried out using GPT-3.5 and GPT-4 to see if machine learning methods based on LLMs could rival peer grading in reliability and automation when evaluating short writing assignments on topics in astronomy. The audience was lifelong learners in three massive open online courses (MOOCs) offered through Coursera, though the results are also applicable to non-science majors in university settings. The data consisted of answers from 120 students to 12 questions across the three courses. The LLMs were fed the total grades, model answers, and rubrics from an instructor for all of the questions. In addition to being tested on how reliably they reproduced instructor grades, the LLMs were asked to generate their own rubrics. Overall, the LLMs were more reliable than peer grading, both in the aggregate and by individual student, and they came much closer to the instructor grades for all three of the online courses. GPT-4 generally outperformed GPT-3.5. The implication is that LLMs can be used for automated, reliable, and scalable grading of student science writing.
Keywords:
Artificial intelligence, machine learning, technology, science, student writing.
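
The paper itself does not include code, but the grading setup the abstract describes (supplying a question, an instructor's model answer, and a rubric, then asking GPT-3.5 or GPT-4 to score a student response) maps naturally onto a single chat-completion call. The sketch below is a hypothetical illustration of that setup using the OpenAI Python client; the function name grade_answer, the prompt wording, and the 0-10 score scale are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of rubric-based LLM grading as described in the
# abstract. Prompt wording, score scale, and names are illustrative
# assumptions, not the authors' actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def grade_answer(question: str, model_answer: str, rubric: str,
                 student_answer: str, model: str = "gpt-4") -> str:
    """Ask the LLM to score one short-answer response against a rubric."""
    prompt = (
        f"Question: {question}\n\n"
        f"Model answer: {model_answer}\n\n"
        f"Rubric: {rubric}\n\n"
        f"Student answer: {student_answer}\n\n"
        "Grade the student answer against the rubric. "
        "Return a score from 0 to 10 and a one-sentence justification."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a careful grader of short science writing."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # keep scoring as deterministic as possible
    )
    return response.choices[0].message.content
```

In a study like the one described, a loop over the 120 students and 12 questions would collect these scores for comparison against the instructor and peer grades; swapping the model parameter between "gpt-3.5-turbo" and "gpt-4" would support the head-to-head comparison the abstract reports.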