WEB TOOL FOR THE COMPARISON OF MULTIPLE-CHOICE SCORING SCHEMES
TU Ilmenau (GERMANY)
About this paper:
Appears in: INTED2021 Proceedings
Publication year: 2021
Pages: 4105-4113
ISBN: 978-84-09-27666-0
ISSN: 2340-1079
doi: 10.21125/inted.2021.0837
Conference name: 15th International Technology, Education and Development Conference
Dates: 8-9 March, 2021
Location: Online Conference
Abstract:
Multiple-choice tasks (MC-tasks), which are often used in exams, have many advantages over essay-type tasks. They are easy to understand and can be used in nearly every subject area. In addition, the evaluation of the answers to such a test is simple and fast; it can even be done by a machine, which is useful for e-learning tests or for exams with a large number of participants.
On the other hand, MC-tests have one major disadvantage compared to other types of exam tasks. Because the set of possible answers is given, there is a considerable chance that a student simply guesses the correct one. Teachers therefore cannot determine whether a student actually knew the answer or was merely lucky. In some cases there is even a high probability that a student passes the exam by choosing answers at random. To prevent this, many teachers have devised scoring schemes that go beyond simply awarding points for correct answers; a common example is deducting points as a penalty for wrong answers. However, some of these schemes, while offering a theoretically low chance of passing by guessing, are unfair and too strict from the students' perspective, or may even violate local law. Therefore, a systematic analysis and comparison of different scoring schemes, taking the kind of test and its application context into account, is needed to find a good balance between a small chance of passing by guessing and a legal, fair test for students.
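To make the guessing problem concrete: for a simple single-choice test with one point per correct answer and no penalty, the number of correct guesses follows a binomial distribution, so the chance of passing by pure guessing can be computed directly. The following Python sketch illustrates this; the function name and parameters are illustrative and not part of the presented tool.

    from math import comb

    def pass_probability_by_guessing(n_questions, n_options, pass_fraction):
        """Chance of passing a single-choice test by pure guessing.

        Assumes one point per correct answer and no penalty for wrong
        answers; correct guesses are Binomial(n_questions, 1/n_options).
        """
        p = 1.0 / n_options
        needed = pass_fraction * n_questions  # points required to pass
        return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
                   for k in range(n_questions + 1) if k >= needed)

    # Example: 20 questions, 4 answer options each, 50% needed to pass:
    print(pass_probability_by_guessing(20, 4, 0.5))  # ~0.014

With these example parameters, roughly 1.4% of pure guessers would pass, which illustrates why teachers consider penalty schemes in the first place.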
In the full paper we introduce a new HTML5-based web tool for the comparison of multiple-choice scoring schemes, which can be used for creating appropriate MC-tests. The tool provides a variety of scoring schemes and different types of MC-tasks, such as single-choice and multiple-response, as well as certain variants of those task types (e.g. a variable number of items per question, support for answer justification, etc.). A visualization of the currently selected MC-task type helps to select the correct type and the corresponding parameters as the basis for the statistical simulation. The output consists of a function graph of the selected scoring scheme, a statistics diagram, point tables, and the calculated probability that a student passes the parameterized exam by guessing. These outputs are meant to help users understand the influence of specific parameters of a certain MC-test and to build fair exams. Compared with an older tool presented at an earlier INTED conference, the new one has a revised interface; more importantly, it offers better possibilities for a more detailed modelling of the underlying MC-task type, which allows more realistic simulation results.
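The abstract does not specify the internals of the tool's statistical simulation. As an illustration of the kind of comparison it supports, the following hedged Monte Carlo sketch estimates the pass rate of pure guessers under a penalty scheme; the function name, parameters, and the specific scheme (+1 per correct answer, a configurable deduction per wrong answer, total floored at zero) are assumptions made for this example only.

    import random

    def simulated_pass_rate(n_trials, n_questions, n_options,
                            wrong_penalty, pass_points):
        """Monte Carlo estimate of the pass rate under pure guessing.

        Scoring: +1 point per correct answer, minus wrong_penalty per
        wrong answer, with the total score floored at 0.
        """
        passed = 0
        for _ in range(n_trials):
            correct = sum(random.randrange(n_options) == 0
                          for _ in range(n_questions))
            score = max(0.0, correct - wrong_penalty * (n_questions - correct))
            if score >= pass_points:
                passed += 1
        return passed / n_trials

    # 20 questions, 4 options, 10 points needed to pass:
    print(simulated_pass_rate(100_000, 20, 4, 0.0, 10))   # no penalty, ~1.4%
    print(simulated_pass_rate(100_000, 20, 4, 0.25, 10))  # with penalty, ~0.1%

Such a simulation makes the trade-off visible: in this assumed setup the penalty scheme cuts the guessing pass rate by roughly an order of magnitude, but it also penalizes partial knowledge, which is exactly the fairness trade-off the tool is designed to let teachers explore.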
Keywords:
Exam evaluation, appropriate calculation schemes, interactive exploration of scoring schemes, fairness of evaluation, modelling MC-tasks.