About this paper

Appears in: ICERI2014 Proceedings
Pages: 4924-4926
Publication year: 2014
ISBN: 978-84-617-2484-0
ISSN: 2340-1095

Conference name: 7th International Conference of Education, Research and Innovation
Dates: 17-19 November, 2014
Location: Seville, Spain


N. Wesseling

Amsterdam University of Applied Sciences (NETHERLANDS)
Like most universities around the world, the Amsterdam University of Applied Sciences conducts surveys (the student evaluation monitor: STEM) among its students to evaluate the different courses and their teachers. At the Department of Media, Information and Communication, the response by students tends to decline over the course of the year. In 2011-2012, with a limited enrolment of 900 first-year students, 70% responded to the first survey, conducted after the first exams in October; this dropped to 26% in the last survey at the end of the first year (July 2012). In 2012-2013 (with the same number of students) the response rates were 75% and 30% respectively. This might be due to several factors, such as the length of the questionnaire, the way the survey is distributed (via e-mail to the students' university accounts), the timing of the surveys (after the courses and exams), or simply a lack of interest.

Another problem with the surveys stems from the effort to limit the length of the questionnaires. As a result, some aspects relevant to understanding the success of students (or the returns of the department) and the quality of the courses and teachers are not measured, such as: the coherence between the courses, the students' opinion of the form of education and exams, the connection between the evaluations and the exam results, and other factors influencing student success.

Given these difficulties, and the fact that insight into all of the aspects mentioned above is crucial for both students and teachers, and not least for management, a new approach to evaluation is needed: an evaluation system that can uncover crucial information, for example to pinpoint the characteristics of dropouts or long-term students in order to limit these and/or to improve the education/course. This paper describes a pilot study in which a first step towards a new way of evaluating is taken by separating the course and teacher evaluations from the rest of the surveys, using an app/QR code or website.
Furthermore, the literature on in-class and out-of-class surveys and on student success will serve as a theoretical basis for the discussion of this pilot, which is part of a broader PhD research project.
@InProceedings{wesseling2014,
author = {Wesseling, N.},
title = {STUDENT EVALUATION 2.0},
series = {7th International Conference of Education, Research and Innovation},
booktitle = {ICERI2014 Proceedings},
isbn = {978-84-617-2484-0},
issn = {2340-1095},
publisher = {IATED},
location = {Seville, Spain},
month = {17-19 November, 2014},
year = {2014},
pages = {4924-4926}}
TY - CONF
TI - STUDENT EVALUATION 2.0
AU - N. Wesseling
SN - 978-84-617-2484-0/2340-1095
PY - 2014
Y1 - 17-19 November, 2014
CI - Seville, Spain
JO - 7th International Conference of Education, Research and Innovation
JA - ICERI2014 Proceedings
SP - 4924
EP - 4926
ER -
N. Wesseling (2014) STUDENT EVALUATION 2.0, ICERI2014 Proceedings, pp. 4924-4926.