About this paper

Appears in: INTED2014 Proceedings
Pages: 4023-4032
Publication year: 2014
ISBN: 978-84-616-8412-0
ISSN: 2340-1079

Conference name: 8th International Technology, Education and Development Conference
Dates: 10-12 March, 2014
Location: Valencia, Spain

MC MONITORING: AUTOMATED EVALUATION OF MULTIPLE CHOICE EXAMS AND TEST ITEMS

M. Nettekoven, K. Ledermüller

WU Vienna University of Economics and Business (AUSTRIA)

Multiple choice exams are widely used in educational institutions. They are usually closely linked with an e-learning preparation phase and, more recently, are sometimes embedded in an e-assessment environment. Besides many advantages (and some disadvantages), MC exams lend themselves to analysis with a wide range of statistical methods, from basic descriptive statistics to sophisticated models, with Item Response Theory (IRT), the Rasch model, and its various extensions as the most prominent representatives.

However, most lecturers who use multiple choice exams are either unfamiliar with these IRT models or lack the statistical knowledge, time, or inclination to apply them on their own in order to gain feedback about the inner workings of their multiple choice tests.

We developed a tool that automatically analyzes multiple choice exams and their embedded test items using several statistical methods: the Rasch model and its extensions for polytomous item response categories, as well as more widely known methods such as descriptive statistics, hierarchical clustering, factor analysis, and analysis of variance. The automatically generated report lists the key figures of the various analyses, together with detailed explanations that help non-statisticians interpret the results.
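
The paper does not publish its implementation, so the following is only a minimal Python sketch of two of the figures the abstract mentions: classical descriptive item statistics and Rasch item difficulties. All variable names, the synthetic data, and the choice of joint maximum likelihood are assumptions; dedicated Rasch software would normally use conditional or marginal estimation.

# Illustrative sketch only (not the authors' published implementation):
# given a hypothetical 0/1 response matrix, compute classical descriptive
# item statistics and Rasch item difficulties via joint maximum likelihood.
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in data: rows = examinees, columns = MC items,
# entries 1 (correct) / 0 (incorrect).
rng = np.random.default_rng(0)
p_true = rng.uniform(0.2, 0.8, size=10)           # assumed true item easiness
X = (rng.random((200, 10)) < p_true).astype(int)
n_persons, n_items = X.shape

# Classical item analysis: difficulty (proportion correct) and
# corrected item-total (rest-score) discrimination.
difficulty = X.mean(axis=0)
rest = X.sum(axis=1, keepdims=True) - X
discrimination = np.array(
    [np.corrcoef(X[:, j], rest[:, j])[0, 1] for j in range(n_items)]
)

# Dichotomous Rasch model, fitted here by joint maximum likelihood for
# brevity; production tools typically use conditional or marginal ML.
def neg_log_lik(params):
    theta = params[:n_persons]                    # person abilities
    beta = params[n_persons:]
    beta = beta - beta.mean()                     # identification constraint
    logits = theta[:, None] - beta[None, :]
    # Bernoulli log-likelihood: x * logit - log(1 + exp(logit))
    return -(X * logits - np.logaddexp(0.0, logits)).sum()

res = minimize(neg_log_lik, np.zeros(n_persons + n_items), method="L-BFGS-B")
beta_hat = res.x[n_persons:] - res.x[n_persons:].mean()

for j in range(n_items):
    print(f"item {j + 1:2d}: difficulty={difficulty[j]:.2f}  "
          f"discrimination={discrimination[j]:.2f}  rasch_beta={beta_hat[j]:+.2f}")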

We implemented this tool on our university's exam server, which administers and evaluates all multiple choice exams; the analysis is executed whenever new exam data are uploaded. In addition to the exam results, lecturers receive a detailed statistical analysis report of the exam, which helps them improve the overall quality of their exams.

A first feedback session has yielded very positive responses: lecturers greatly appreciate the standardized summary of an exam's key figures, the easy means of monitoring the quality of their multiple choice exams, and the hints on how they could improve their test items.

@InProceedings{NETTEKOVEN2014MCM,
author = {Nettekoven, M. and Lederm{\"{u}}ller, K.},
title = {MC MONITORING: AUTOMATED EVALUATION OF MULTIPLE CHOICE EXAMS AND TEST ITEMS},
series = {8th International Technology, Education and Development Conference},
booktitle = {INTED2014 Proceedings},
isbn = {978-84-616-8412-0},
issn = {2340-1079},
publisher = {IATED},
location = {Valencia, Spain},
month = {10-12 March, 2014},
year = {2014},
pages = {4023-4032}}

TY - CONF
AU - M. Nettekoven
AU - K. Ledermüller
TI - MC MONITORING: AUTOMATED EVALUATION OF MULTIPLE CHOICE EXAMS AND TEST ITEMS
SN - 978-84-616-8412-0/2340-1079
PY - 2014
Y1 - 10-12 March, 2014
CI - Valencia, Spain
JO - 8th International Technology, Education and Development Conference
JA - INTED2014 Proceedings
SP - 4023
EP - 4032
ER -

M. Nettekoven, K. Ledermüller (2014) MC MONITORING: AUTOMATED EVALUATION OF MULTIPLE CHOICE EXAMS AND TEST ITEMS, INTED2014 Proceedings, pp. 4023-4032.