CLOSED-QUESTION AND OPEN-PROBLEM ASSESSMENT IN UNDERGRADUATE ELECTRONIC ENGINEERING COURSES – A CASE STUDY
Warsaw University of Technology (POLAND)
About this paper:
Appears in: ICERI2016 Proceedings
Publication year: 2016
Pages: 8133-8142
ISBN: 978-84-617-5895-1
ISSN: 2340-1095
doi: 10.21125/iceri.2016.0862
Conference name: 9th annual International Conference of Education, Research and Innovation
Dates: 14-16 November, 2016
Location: Seville, Spain
Abstract:
The paper shares our experience in the assessment of student learning, using as a case study the courses in the principles of circuit theory taught at the Faculty of Electronics and Information Technology of the Warsaw University of Technology in the 2nd semester of the 1st year. The results presented here are based on about 2400 scored exam papers, involving more than 1000 multiple-choice test questions and more than 140 open-problem exercises, and cover the period since 2009.

The final exam for these courses consists of two parts, each lasting 75 minutes. The first part contains 15 multiple-choice questions (each with a single correct answer and three distractors), worth 1 or 2 points each, for a maximum score of 25 points. This part is taken on printed test sheets, as the number of students (up to 150 at one time) exceeds the capacity of the computer-based classrooms at our faculty. The second part of the exam consists of two larger multi-concept open problems, scored out of 12 and 13 points respectively. The total exam score is then added to the scores obtained during the semester (for tests and laboratories). Gaining 50% of the maximum possible total for the semester (exam and course work) is sufficient to pass. Students have up to three attempts to achieve the needed exam score.
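As a simple illustration of this pass rule (not part of the original course materials), the check can be written as below; the exam maximum of 50 points follows from the figures above, while the coursework maximum used here is a purely hypothetical value, since the abstract does not state it.

  # Hypothetical sketch of the pass rule described above. The exam maximum
  # follows from the figures given (25 + 12 + 13 = 50 points); the
  # coursework maximum of 50 points is an assumed value, not stated in the text.
  EXAM_MAX       = 25 + 12 + 13
  COURSEWORK_MAX = 50

  def passed?(exam_score, coursework_score)
    exam_score + coursework_score >= 0.5 * (EXAM_MAX + COURSEWORK_MAX)
  end

  passed?(30, 25)  # => true  (55 of 100 points)
  passed?(20, 20)  # => false (40 of 100 points)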

The multiple-choice part of the exam is computer assisted in both preparation and scoring. First, the questions are prepared in the form of a YAML database. New questions are created on an ongoing basis, as a database of >1000 items is still far too small and could easily be memorized. The content is written in LaTeX, which allows us to easily incorporate mathematical formulas and graphics in both the questions and the answers. Second, a computer program written in Ruby, utilizing the ‘exam’ document class for LaTeX, creates a PDF file containing the test sheets. The sheets are randomized with respect to the order of questions and answers, as well as the choice of the answers themselves (for most items the database contains more correct answers and distractors than are needed), to make cheating difficult; a sketch of this workflow is given below. The unique randomization key is printed on each test sheet. After the test, the students’ answers are entered manually into another Ruby program, which computes the scores and uploads them to the faculty server. The process of data entry and checking takes about 1 minute per student. For comparison, scoring the two open-problem questions usually takes 15-20 minutes per student.
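The following minimal sketch, written in Ruby like the tools described above, illustrates the kind of workflow outlined in this paragraph. It is not the authors' actual code: the YAML field names (text, correct, distractors), the sample question, and the helper to_exam_latex are hypothetical. It reads one question record, picks a single correct answer and three distractors at random from the pools, shuffles them, and emits markup for the ‘exam’ document class.

  require 'yaml'

  # Hypothetical question record; the real database layout is not published.
  QUESTIONS = YAML.safe_load(<<~'YAML')
    - text: 'What is the equivalent resistance of two $2\,\Omega$ resistors connected in parallel?'
      correct:
        - '$1\,\Omega$'
        - '$1000\,\mathrm{m}\Omega$'
      distractors:
        - '$2\,\Omega$'
        - '$4\,\Omega$'
        - '$0.5\,\Omega$'
        - '$3\,\Omega$'
  YAML

  # Pick one correct answer and three distractors at random, shuffle them,
  # and emit LaTeX for the 'exam' document class.
  def to_exam_latex(question, rng = Random.new)
    answers = [question['correct'].sample(random: rng)] +
              question['distractors'].sample(3, random: rng)
    lines = ["\\question #{question['text']}", '\begin{choices}']
    answers.shuffle(random: rng).each { |a| lines << "  \\choice #{a}" }
    lines << '\end{choices}'
    lines.join("\n")
  end

  puts to_exam_latex(QUESTIONS.first)

In the actual system, the random seed used for each sheet would have to be recorded (the abstract mentions a unique randomization key printed on every sheet) so that the answer key can be reconstructed during scoring.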

As checking the open-problem part of the exam requires much more time and effort than scoring the multiple-choice part, the question arises whether the latter could substitute for the former and suffice as the sole assessment tool. Our experience shows that the answer is negative. The correlation coefficient between the multiple-choice and open-problem parts of the exam equals 0.3 (with a p-value << 0.0001). Even removing items with a difficulty coefficient below 0.1 or above 0.9, as well as those with an item-test correlation coefficient below 0.2, does not change this low correlation between the closed and open problems. This is not surprising, as the two parts were designed to test different student abilities (lower-order skills and higher-order reasoning, respectively). It should also be noted that even after removing poorly-behaving items, the test reliability coefficient remains low (up to 0.7). This could be attributed to the fact that the questions cover many distinct areas.
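For reference, the item statistics mentioned above can be computed as in the Ruby sketch below. The abstract does not specify which exact formulas were used, so the definitions here are assumptions: difficulty as the proportion of correct answers, item-test correlation as the Pearson correlation between an item and the total score, and reliability as Cronbach's alpha over the dichotomous items (equivalent to KR-20 in this case).

  def mean(xs)
    xs.sum(0.0) / xs.size
  end

  def variance(xs)
    m = mean(xs)
    xs.sum(0.0) { |x| (x - m)**2 } / xs.size
  end

  # Pearson correlation coefficient between two score vectors.
  def pearson(xs, ys)
    mx, my = mean(xs), mean(ys)
    cov = xs.zip(ys).sum(0.0) { |x, y| (x - mx) * (y - my) }
    cov / Math.sqrt(xs.sum(0.0) { |x| (x - mx)**2 } *
                    ys.sum(0.0) { |y| (y - my)**2 })
  end

  # responses: one row of 0/1 item scores per student.
  # Returns, for each item, its difficulty (proportion correct) and its
  # correlation with the students' total scores.
  def item_stats(responses)
    totals = responses.map(&:sum)
    (0...responses.first.size).map do |j|
      item = responses.map { |row| row[j] }
      { difficulty: mean(item), item_test_r: pearson(item, totals) }
    end
  end

  # Cronbach's alpha as an estimate of test reliability.
  def cronbach_alpha(responses)
    k = responses.first.size
    item_var_sum = (0...k).sum(0.0) { |j| variance(responses.map { |row| row[j] }) }
    (k / (k - 1.0)) * (1.0 - item_var_sum / variance(responses.map(&:sum)))
  end

  # The ~0.3 part-to-part correlation reported above would correspond to
  #   pearson(multiple_choice_scores, open_problem_scores)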