PERFORMANCE ON MULTIPLE CHOICE EXAMS: THE EFFECT OF RANDOM ORDER
Universidad de Murcia (SPAIN)
About this paper:
Appears in: EDULEARN13 Proceedings
Publication year: 2013
Conference name: 5th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2013
Location: Barcelona, Spain
Abstract: The effect of shuffling question order on student grades has been a hotly debated topic. Preparing several versions of the same test is a technique widely used by teachers, especially today when the number of students is high, which reduces the ability to properly monitor copying during an examination. By ordering the questions randomly across test versions, teachers guarantee the same level of difficulty for all students on equal terms and, at the same time, make it harder for students to copy from their peers. However, a risk behind this technique is that the difficulty of an exam may be determined not by the difficulty of the questions being asked but by the order in which they are asked. In this sense, some studies find that variation in grades is entirely due to the completion of one test version or another (Bresnock et al., 1989; Sue, 2006), while others have specifically shown that the results obtained by students can be affected by the version performed, in reference to the order of the questions (Taub and Bell, 1975; Carlson and Ostrosky, 1992; Doerner and Calhoun, 2009). The main explanation for these differences is that students may perform better on a content-ordered exam because they experience less test anxiety. They may also glean information about one question from previous questions and may concentrate better if they do not have to jump from one topic to another.
However, the literature has overlooked other factors that may explain differences in students' multiple-choice performance across versions: for example, students' general qualifications, their experience over their educational career, their levels of risk aversion, and so on. Taking these gaps into account, in this study we analyze whether students' results vary depending on the multiple-choice version performed, using a number of moderating factors such as qualifications and experience in the subject, age, gender, and nationality.
In order to empirically test the effect of scrambling the content order of multiple-choice questions on student performance, and to explore the differences in outcomes, we use a sample of 280 type 01 and type 02 exams taken by undergraduate students of the "Health and Safety at Work" subject at the University of Murcia (Spain) during the 2012-13 academic year.
Our preliminary results indicate that there are significant differences between the results of type 01 and type 02 exams. Specifically, there are significant differences in the response procedure: students who answered exam type 02 left more questions blank than those who answered type 01. To explain and understand the origin of these differences, the study sample was segmented in three ways: (1) by the scores obtained by the students in the examination, (2) by whether or not students had previous experience with the subject's examination, and (3) by the risk taken by the student in sitting the exam. Specifically, students with worse grades (those below the average score), students sitting the exam at first call, older students (those above the average age), and male students who performed exam type 02 obtained lower scores and grades, in terms of more blank answers and fewer wrong answers.
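The kind of comparison described above, whether one exam version produces more blank answers than the other, can be illustrated with a two-sample test of means. A minimal sketch follows, with invented data (the real sample of 280 exams is not reproduced here) and a hand-rolled Welch's t statistic; the variable names and figures are illustrative assumptions, not the study's actual data or analysis code.

```python
# Illustrative sketch only: comparing mean blank-answer counts between
# exam versions type 01 and type 02. The data below are invented.
from statistics import mean, variance
from math import sqrt

type01_blanks = [0, 1, 0, 2, 1, 0, 1, 0, 0, 1]  # blanks per exam, type 01
type02_blanks = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]  # blanks per exam, type 02

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

t = welch_t(type01_blanks, type02_blanks)
# A clearly negative t here means type 02 takers left more blanks on average,
# mirroring the direction of the study's preliminary finding.
print(round(t, 2))
```

In practice the statistic would be compared against the t distribution (or computed with a library routine such as SciPy's independent-samples t-test) to obtain a p-value, and the same comparison repeated within each segment (grades, call, age, gender).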
Keywords: Multiple choice exam, student performance, type differences, personal and professional student characteristics.