NATURAL LANGUAGE PROCESSING APPLIED FOR TEACHING EVALUATION
Universidad Rey Juan Carlos (SPAIN)
About this paper:
Conference name: 15th annual International Conference of Education, Research and Innovation
Dates: 7-9 November, 2022
Location: Seville, Spain
Abstract:
In the context of higher education, the improvement of teaching quality is a constant challenge. Student ratings of teaching are often considered fundamental for measuring the quality of teaching, educational development, and the enhancement of student learning.
In the university context, the institution develops methods for measuring teaching quality and teacher effectiveness, defining the process of collecting data and making judgments. However, for decades there has been a debate about whether student ratings of teaching can be trusted. Different studies have shown that several factors influence student ratings, such as teacher body language, timing, or student fatigue, among others, and other studies suggest that student ratings and student learning are unrelated. Surveys should therefore become the starting point for a critical discourse on effective teaching practices: teaching thousands of students without engaging them in discussions about their learning experiences is not rational. In this sense, we propose a simple way to collect students' opinions for teaching evaluation. We developed an anonymous online form with several questions about general aspects shared by all subjects, such as theory, practice, or evaluation, plus a final open question to obtain the overall evaluation of the subject. Each question must be answered with a numerical value between 1 and 5, and the student can add free text to explain or justify that value. The proposed form is general enough to be applied to any subject; at the same time, the free-text answers that accompany each question, together with the final open question, allow the singularities of each subject to be captured.
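As a minimal sketch (the field names are ours, not taken from the actual form), each item of the form can be thought of as a numerical score between 1 and 5 plus an optional free-text comment:

from dataclasses import dataclass
from typing import Optional

@dataclass
class FormAnswer:
    question: str            # e.g. "Theory", "Practice", "Evaluation", "Overall"
    score: int               # numerical value between 1 and 5
    comment: Optional[str]   # optional free text explaining the score

answer = FormAnswer(question="Theory", score=4,
                    comment="The theory sessions were clear but too fast.")
assert 1 <= answer.score <= 5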
The objective is to learn the students' opinions about specific aspects of each subject as well as their overall view, captured by the final general question. Concretely, the system aims at understanding everything a student wants to point out about the subject, good or bad, and at establishing whether or not numerical and text responses are correlated. The proposed system automatically identifies the sentiment of the text answers through Natural Language Processing techniques.
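The abstract does not name the NLP toolkit that was used; as a hedged illustration only, a generic pre-trained sentiment pipeline (here the Hugging Face transformers library, our choice rather than necessarily the authors') could classify each free-text answer as follows:

from transformers import pipeline

# Pre-trained sentiment classifier; the model is the library default and
# purely illustrative, since the paper does not specify the analyzer used.
classifier = pipeline("sentiment-analysis")

comments = [
    "The practical sessions were very useful.",
    "The workload was excessive and the materials arrived late.",
]
for text, result in zip(comments, classifier(comments)):
    print(result["label"], round(result["score"], 2), "-", text)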
We collected student feedback for four different subjects from two degree programs. First, we observed that approximately 40% of the enrolled students responded to the form. The university's institutional surveys are usually completed by more students because they are mandatory, but this also means that the answers may be less sincere. We then took the students' numerical response to the question about the general assessment as a gold standard and contrasted it with the results of the automatic sentiment analysis. A significant correlation was found for negative evaluations, whereas the correlation was weaker for positive ones: students who indicated in the numerical rating that they liked the subject often did not express the same in the text response. This can indicate two things: either the sentiment analyzer needs to be adjusted, or the students are inconsistent in their answers.
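The abstract does not state how the numerical gold standard and the sentiment output were contrasted; a sketch using a rank correlation (our assumption) on toy values would look like this:

from scipy.stats import spearmanr

# Gold standard: overall 1-5 ratings given by students (toy values).
ratings = [5, 4, 2, 1, 3, 5, 2]
# Sentiment polarity of the corresponding text answers mapped to [-1, 1];
# these values are illustrative, not real data from the study.
polarity = [0.8, 0.2, -0.6, -0.9, 0.1, 0.4, -0.5]

rho, p_value = spearmanr(ratings, polarity)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")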
Further work will be done on the sentiment analysis system: reviewing the vocabulary used by the students, extending the study to all the questions on the form, and analyzing whether the students' opinions expressed in the overall rating differ from those stated in the rest of the form.
Keywords:
Natural Language Processing, Sentiment Analysis, Student Performance, Educational Data Mining.