

A. Mkrtchyan

State Engineering University of Armenia (ARMENIA)
In spite of the appealing characteristics of multiple-choice questions (MCQs), knowledge assessment using traditional MCQs is not always objective and effective. Multiple-choice questions are rarely considered a suitable substitute for traditional assessment of deep knowledge. They are frequently used, yet there is no established way to investigate non-functioning distractors in developed tests. This paper discusses the advantages and disadvantages of traditional multiple-choice questions and introduces a new way to investigate non-functioning distractors in developed tests, based on an information retrieval model. Unlike traditional statistics-based analyses, the proposed approach allows test developers to analyze test quality before the knowledge assessment process takes place.

Multiple-choice questions are now a strongly preferred testing instrument across many higher education disciplines, and among business students in particular. There is no single ideal assessment process, and so MCQ testing, too, has its advantages and disadvantages.
Commonly raised objections to the use of MCQs:
• There are no tools or approaches for investigating non-functioning distractors in developed tests besides statistical analysis.
• With MCQs there is a possibility of guessing the correct answer; numerous methods exist to penalize guessing, such as negative marking, increasing the number of answer options, or giving partial marks to an answer very near the correct one.
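The negative-marking idea mentioned above can be illustrated with the classic formula-scoring scheme: +1 per correct answer and a penalty of 1/(k−1) per wrong answer on k-option items, so that pure random guessing has an expected score of zero. The function name and defaults below are illustrative, not part of the paper:

```python
def formula_score(num_correct, num_wrong, options_per_item=4):
    """Formula scoring with negative marking: +1 per correct answer,
    minus 1/(k-1) per wrong answer on k-option items, so random
    guessing yields an expected score of zero."""
    penalty = 1.0 / (options_per_item - 1)
    return num_correct - penalty * num_wrong

# On 4-option items, 10 correct and 6 wrong answers give 10 - 6/3 = 8.0
print(formula_score(10, 6))
```

Note that a purely random guesser on 4-option items gets one answer right for every three wrong on average, so the penalty cancels the gain exactly.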

Distractors play a vital role in the process of multiple-choice testing: good-quality distractors ensure that the outcome of the tests provides a more credible and objective picture of the knowledge of the testees involved. Poor distractors, on the other hand, contribute little to the accuracy of the assessment, as obvious or too-easy distractors pose no challenge to students and, as a result, fail to distinguish high-performing from low-performing learners.
Despite the existing body of research evaluating the optimal number of distractors in multiple-choice items, substantially less research has focused on examining non-functioning distractors in MCQs in general, and no recent studies have specifically examined the frequency of non-functioning distractors in teacher-generated items. Automatically generated items are another problematic topic; for our purposes, however, it makes no difference how the items are generated.

The main factor limiting the power of computer-based solutions in the knowledge assessment sphere is ambiguity (multi-variance) in the representation of test items (distractors). The sources of ambiguity are synonyms, homonyms, polysemous vocabulary, and idiomatic turns of phrase, which are characteristic of any natural language. A similar problem exists in the development of information retrieval systems, which must solve essentially the same issues as arise when investigating and analyzing distractors in test items. However, information retrieval systems have a longer history of development than e-learning systems. It therefore makes sense to study information retrieval systems in order to explore the possibility of reusing concepts of the information retrieval model as a basis for analyzing test item distractors.
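To make the analogy concrete, the following is a minimal sketch of how a basic information retrieval technique (a bag-of-words vector-space model with cosine similarity) could be turned toward distractor analysis: a distractor that is lexically unrelated to the correct answer is a candidate non-functioning distractor, since testees can eliminate it without domain knowledge. The helper names, the example items, and the similarity threshold are illustrative assumptions, not the paper's actual method:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term-frequency vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def flag_weak_distractors(answer_key, distractors, threshold=0.1):
    """Flag distractors whose similarity to the correct answer falls
    below the threshold: options sharing almost no vocabulary with
    the key are likely too easy to eliminate (non-functioning)."""
    return [d for d in distractors
            if cosine_similarity(answer_key, d) < threshold]

answer_key = "a data structure with last-in first-out access"
distractors = [
    "a data structure with first-in first-out access",  # plausible
    "bananas grown in tropical climates",               # implausible
]
print(flag_weak_distractors(answer_key, distractors))
```

A real system would of course need the very machinery the paragraph above motivates (synonym and polysemy handling, e.g. via term weighting or semantic expansion), since raw lexical overlap mistakes paraphrased distractors for unrelated ones; the point of the sketch is only that the retrieval model's similarity machinery transfers directly to the pre-assessment analysis of distractors.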