RETHINKING ASSESSMENT WITH AUTOMATIC ITEM GENERATION
Technische Universität Dresden (GERMANY)
About this paper:
Appears in: INTED2022 Proceedings
Publication year: 2022
Pages: 1176-1180
ISBN: 978-84-09-37758-9
ISSN: 2340-1079
doi: 10.21125/inted.2022.0357
Conference name: 16th International Technology, Education and Development Conference
Dates: 7-8 March, 2022
Location: Online Conference
Abstract:
In traditional assessment, several steps are required:
A) Experts define learning goals or competencies in a subject area.
B) They write items in which learners can apply what they have learned.
C) The test is administered and students work on the items.
D) The experts score the test items and the results of the assessment are announced.
However, especially if the assessment is periodic, as in schools or universities, this traditional approach is highly time- and resource-consuming.

With Automatic Item Generation (AIG; Gierl, Lai, & Turner, 2012; Baum et al., 2021), the assessment process differs significantly from this traditional way. In AIG, experts do not write single items; rather, they build a highly structured representation of the subject area called a cognitive model. In the cognitive model, they define typical problems that occur in the subject area and the information required to handle these problems. Afterwards, the experts create an item model, which includes a meta-item comprising all the information necessary to deal with many of the problems. Some of this information is then randomly blanked out, and software can be used to create a set of items by combining the cognitive and the item model. These items are then stored in an item bank, which supports scoring the test.
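The combination step can be illustrated with a minimal sketch. The example below is our own simplification, not the paper's implementation: the cognitive model is reduced to classes mapping to feature values, the item model to one meta-item template with blanks, and all content (symptoms, drugs) is invented for illustration.

```python
import itertools

# Invented, drastically simplified cognitive model: each class maps to
# the feature values (nodes) it groups.
cognitive_model = {
    "symptom": ["headache", "fever"],
    "drug": ["ibuprofen", "paracetamol"],
}

# Item model: one meta-item template whose blanks reference the classes.
item_template = ("A patient presents with {symptom}. "
                 "Is {drug} an appropriate treatment?")

def generate_items(model, template):
    """Create one item per combination of class values by filling the
    template's blanks (Cartesian product over the classes)."""
    keys = list(model)
    return [template.format(**dict(zip(keys, combo)))
            for combo in itertools.product(*(model[k] for k in keys))]

# 2 symptoms x 2 drugs -> 4 items for the item bank
item_bank = generate_items(cognitive_model, item_template)
```

Even this toy version shows the leverage of the approach: one template and a few class values already yield a combinatorial number of items.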

In the AIG project, we created two software products. The "AIG Model Generator" supports experts in creating the cognitive and the item model. The "AIG Item Generator" combines the two models and creates the items automatically. The AIG Model Generator is a web application that allows experts to design a cognitive model in a graphical representation. This model consists of so-called classes, nodes and edges. Nodes reference low-level features, whereas classes represent groupings of these features. Edges represent the connections between nodes and classes and can be grouped; these groups are used during the path-walking process to find coherent nodes or classes.
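The path-walking idea can be sketched as a simple graph traversal. The data structure and element names below are assumptions for illustration, not the Model Generator's actual internals: edges carry a group label, and walking follows only edges of one group to collect the coherent elements.

```python
from dataclasses import dataclass

# Hypothetical edge structure: edges connect classes/nodes and belong
# to a group that steers the path-walking process.
@dataclass(frozen=True)
class Edge:
    source: str
    target: str
    group: str  # edge group used during path-walking

def walk(edges, start, group):
    """Follow all edges of one group from a start element and collect
    the coherent classes/nodes reachable from it."""
    reached, frontier = set(), [start]
    while frontier:
        current = frontier.pop()
        for e in edges:
            if e.group == group and e.source == current and e.target not in reached:
                reached.add(e.target)
                frontier.append(e.target)
    return reached

# Illustrative edges (names invented, not from the paper):
edges = [
    Edge("symptom", "finding", "diagnostic"),
    Edge("finding", "diagnosis", "diagnostic"),
    Edge("symptom", "treatment", "therapeutic"),
]
coherent = walk(edges, "symptom", "diagnostic")
```

Restricting the walk to one edge group keeps unrelated parts of the model apart, so only elements that belong together end up in the same generated item.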

The second step in the editor is to define the item model. Possible question formulations are designed in a text editor and include blank spaces that are linked with classes and nodes of the cognitive model. The editor saves the models in XML and JSON representations to hand them over to the AIG Item Generator.
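For the JSON hand-over, a round-trippable representation of the item model might look as follows. The field names (`template`, `blanks`, `links_to`) are invented for this sketch; the paper does not specify the schema.

```python
import json

# Hypothetical item-model representation: blanks in the question text
# are linked by name to classes/nodes of the cognitive model.
item_model = {
    "template": "A patient presents with {symptom}. Is {drug} appropriate?",
    "blanks": {
        "symptom": {"links_to": "class"},
        "drug": {"links_to": "class"},
    },
}

serialized = json.dumps(item_model, indent=2)  # hand-over format
restored = json.loads(serialized)              # what the Item Generator would read
```

A lossless round trip (serialize, then parse) is the property the hand-over needs: the Item Generator must see exactly the model the experts designed.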

Both software products were evaluated. The Model Generator was evaluated with the System Usability Scale (SUS; Brooke, 1996). Twelve experts were asked to create a cognitive model with the Model Generator based on a predefined scenario. They were then asked to describe problems encountered during modeling and to answer the SUS. The results indicate that 8 of the 12 experts were satisfied with their result and only one expert was not able to solve the given scenario. The problems this expert described were used to redesign the software in terms of ease of use. The Item Generator was evaluated by importing cognitive and item models of three different complexity levels. The software combined all models without difficulty, creating multiple item sets.
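The SUS scoring used in the evaluation follows a fixed rule (Brooke, 1996): ten items rated 1 to 5, odd (positively worded) items contribute the rating minus 1, even (negatively worded) items contribute 5 minus the rating, and the raw sum is multiplied by 2.5 to yield a 0-100 score. A short sketch:

```python
def sus_score(responses):
    """System Usability Scale score (Brooke, 1996).
    responses: ten ratings from 1 (strongly disagree) to 5 (strongly agree)."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

best = sus_score([5, 1] * 5)   # most favorable answer to every item
neutral = sus_score([3] * 10)  # neutral answer to every item
```

The most favorable response pattern yields 100 and all-neutral answers yield 50, which is a quick sanity check when implementing the scale.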

References:
[1] J. Brooke (1996). SUS: A quick and dirty usability scale. In Usability Evaluation in Industry, pp. 189-194.
[2] M.J. Gierl, H. Lai, S.R. Turner (2012). Using automatic item generation to create multiple-choice test items. Medical Education, vol. 46, no. 8.
[3] H. Baum et al. (2021). A shift in automatic item generation towards more complex tasks. INTED2021 Proceedings.
Keywords:
Automatic Item Generation, Assessment, Cognitive Model.