STUDENT EVALUATION 2.0
Amsterdam University of Applied Sciences (NETHERLANDS)
About this paper:
Appears in: ICERI2014 Proceedings
Publication year: 2014
Pages: 4924-4926
ISBN: 978-84-617-2484-0
ISSN: 2340-1095
Conference name: 7th International Conference of Education, Research and Innovation
Dates: 17-19 November, 2014
Location: Seville, Spain
Abstract:
Like most universities around the world, the Amsterdam University of Applied Sciences conducts surveys (student evaluation monitor: STEM) among its students to evaluate the different courses and their teachers. At the Department of Media, Information and Communication, student response tends to decline over the course of the year. In 2011-2012, with a limited enrolment of 900 first-year students, 70% responded to the first survey, conducted after the first exams in October, while only 26% responded to the last survey at the end of the first year (July 2012). In 2012-2013 (with the same number of students) the response rates were 75% and 30% respectively. This decline may be due to several factors, such as the length of the questionnaire, the way the survey is distributed (via e-mail to the students' university accounts), the timing of the surveys (after the courses and exams), or simply a lack of interest. Another problem with the surveys stems from the effort to limit the length of the questionnaires: as a result, some aspects relevant to understanding student success (or the department's return) and the quality of courses and teachers are not measured, such as the coherence between courses, students' opinions about the form of teaching and examination, the connection between the evaluation and exam results, and other factors that influence student success. Given these difficulties, and the fact that insight into all of the aspects mentioned above is crucial for students and teachers, and not least for management, a new approach to evaluation is needed: an evaluation system that can uncover crucial information, for example by pinpointing the characteristics of dropouts or long-term students in order to limit their numbers and/or improve the education and courses. This paper describes a pilot study in which a first step towards a new way of evaluating is taken by separating the course and teacher evaluation from the rest of the surveys through an app/QR code or website. Furthermore, the literature on in-class and out-of-class surveys and student success serves as a theoretical basis for the discussion of this pilot, which is part of a broader PhD research project.
Keywords:
Student evaluation, 2.0, QR, website, digital, in-class evaluation.