ADAPTING WRITING PROMPTS AND CORRECTING RUBRICS: AVOIDING AI GENERATED STUDENT WRITING ASSIGNMENTS
Penn State University (UNITED STATES)
About this paper:
Appears in: INTED2024 Proceedings
Publication year: 2024
Pages: 1664-1670
ISBN: 978-84-09-59215-9
ISSN: 2340-1079
doi: 10.21125/inted.2024.0478
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
I am an art historian at a very large research university in the United States. I have developed and currently teach three large online courses, and will soon be adding a fourth. I generally have about 300-350 students per semester in the fall and spring, and a maximum of 200 students in the summer. Beginning in Spring 2023, I started to notice that some student responses to discussion board prompts were vague and disorganized, and that they generally were written in five short paragraphs of empty prose. These same responses all offered a strikingly similar set of points and ideas, and they were very unsatisfactory in comparison with other student responses. Conversations about AI generated writing had just begun at our institution among the faculty, and it was fairly easy to reproduce these poor discussion responses using Chatbot AI. Unfortunately, without a syllabus statement or targeted rubric, I did not feel comfortable confronting students about these issues, although I did lower grades based on perceived effort and writing quality. Beginning in the Summer 2023 term, I rewrote the prompts and rubrics for my major writing assignments (take-home essays in which students design a virtual exhibition of works based on a course theme) and added AI-specific language to my plagiarism and academic integrity statements. The result, for most students, was higher quality writing on the targeted writing assignments. There were a few students who seemed to rely on AI generated writing, but my grading rubric targeted the vague statements and empty prose, allowing me to lower their grades without getting into an argument about whether they had relied on Chatbot or another tool. My summer classes do not have discussion boards: they are shorter classes with fewer assessments. 
This semester (Fall 2023), I have noticed that I continue to receive better quality writing on the two major assignments (the virtual exhibitions), but that students are also experimenting with AI generated writing in the lower stakes discussion boards. I have done very little to change the prompts on the discussion boards so far, but I have adapted my grading rubric. My plan for Spring 2024 is to reduce the number of discussion boards and to increase their point value in order to see whether having fewer, more significant discussion boards works as a deterrent. I will also rewrite the prompts in order to encourage students not to use AI (without specifically mentioning AI). I also plan to introduce additional motivational language into my course introduction and other materials in order to underscore the cognitive benefits of engaging and learning rather than allowing AI to do the work. When I get to Valencia I will be able to present the preliminary results of these changes, along with information about which students seem more likely to rely upon AI generated writing (traditional undergraduates or returning adult students?). Since it is impossible to simply forbid AI use, it is important to teach students about the limitations of such tools and to emphasize the benefits of engaging in course learning activities and assessments without AI. This is an ongoing discussion related to a technology that is constantly changing, and I look forward to reporting on my successes and my continuing challenges as part of that discussion.
Keywords:
AI generated writing, assessment, Chatbot AI, writing prompts, grading rubrics, adaptation.