

G.B. Ronsivalle1, S. Carta2, V. Metus2, M. Orlando2

1Università degli Studi di Verona (ITALY)
The integration of Artificial Intelligence into the field of evaluation is a strategic, concrete and effective opportunity for public and private organizations (universities, companies, armed forces, public administration offices) that routinely certify the competences of large numbers of people (students, employees, suppliers).

This is true for three main reasons:
(1) the use of nonlinear algorithms makes it possible to describe learners’ mental models in a new way, one more consistent with the inherent complexity of knowledge systems;
(2) the implementation of automatic or semi-automatic assessment procedures yields a more precise and objective evaluation and reduces the risk of bias and subjectivity that usually affects evaluation practice;
(3) the rationalization of the evaluation process through a computer protocol improves the effectiveness and efficiency of observing, verifying and analyzing learning outcomes.

Indeed, from a methodological point of view, the effectiveness of a tool for the evaluation of learners’ knowledge depends on two concurrent factors: its validity and its reliability.

There are two more conditions:
(a) its adequacy in representing knowledge “systems”, and
(b) the possibility to customize and optimize the test administration according to each learner’s answers.

These additional conditions require the evaluator to administer the right questions at the right time and to stop the test once a reasonable certainty about the student’s level of competence has been reached.
Unfortunately, satisfying all these requirements at once is difficult with traditional tools. Moreover, the attempt often clashes with the operational and economic constraints of most training systems, and with the noise introduced by the “human” component and its inevitable mistakes.

Therefore, in order to neutralize the background variables, make the administration phase more effective and manage the complexity of the entire evaluation process, the authors propose to “translate” the evaluator’s competences into “intelligent” software.

This software consists of nonlinear algorithms performing three main tasks:
(1) describe the learners’ level of competence as a nonlinear network of knowledge areas;
(2) automatically select and administer the right question at the right time, according to the learners’ performance;
(3) optimize the test duration and “decide” when to stop it in order to increase the level of efficiency and sustainability of the evaluation.
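One way to picture these three tasks is a loop around a predictive model: administer the item whose predicted outcome is most uncertain, and stop once every remaining item can be predicted with enough confidence. The function names, the toy predictor and the entropy-based stopping rule below are illustrative assumptions, not the authors’ actual protocol.

```python
import math

def entropy(p):
    """Uncertainty of a Bernoulli prediction (bits)."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def adaptive_test(predict, answer, n_items, threshold=0.3):
    """Ask the most uncertain item next; stop when all items are predictable.

    predict(known) -> list of P(correct) for every item, given the answers
    collected so far; answer(i) -> observed 0/1 for item i. Both are supplied
    by the caller (e.g. a trained neural engine).
    """
    known = {}                                    # item index -> observed 0/1
    while len(known) < n_items:
        probs = predict(known)
        # uncertainty of each still-unanswered item
        pending = {i: entropy(probs[i]) for i in range(n_items) if i not in known}
        if max(pending.values()) < threshold:     # knowledge map is complete enough
            break
        nxt = max(pending, key=pending.get)       # most informative question
        known[nxt] = answer(nxt)
    return known, predict(known)

# Toy usage with a hand-made predictor: items 0-2 are correlated with each
# other, as are items 3-5 (purely illustrative).
def toy_predict(known):
    probs = [0.5] * 6
    for group in [(0, 1, 2), (3, 4, 5)]:
        seen = [known[i] for i in group if i in known]
        if seen:
            p = 0.95 if sum(seen) / len(seen) > 0.5 else 0.05
            for i in group:
                probs[i] = known.get(i, p)        # observed items are certain
    return probs

asked, final = adaptive_test(toy_predict, lambda i: 1 if i < 3 else 0, 6)
print(sorted(asked))   # -> [0, 3]: one item per correlated group suffices
```

With this toy predictor the loop stops after two questions, because one answer per correlated group already makes the remaining items predictable.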

How can software achieve this goal? Its “mind” is a two-layer Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN). The inputs correspond to the learner’s answers to the test items, and the outputs correspond to the areas of the learner’s actual knowledge. The ANN is trained to generate outputs mirroring the inputs, so that the neural engine can infer the learner’s level of competence even from incomplete information. Indeed, thanks to its synaptic-weight configuration, the ANN gradually processes the learner’s answers, selects the content areas where it needs to collect more data, and identifies the right questions to increase the level of information. Finally, it stops the test when it has sufficient data to complete the map of the learner’s knowledge system.
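A network trained to mirror its inputs can be sketched as a small denoising autoencoder: during training some answers are masked, so at test time the network fills in the items the learner has not yet answered. The synthetic data, the network sizes and the 0.5 coding for unanswered items are illustrative assumptions, not the authors’ actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, N_HIDDEN = 6, 8   # test items; hidden units (illustrative sizes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic training answers (hypothetical data): two competence
# profiles, 1 = correct answer, 0 = wrong answer.
profiles = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
X = np.repeat(profiles, 50, axis=0)

# Two-layer MLP: input -> hidden (tanh) -> output (sigmoid).
W1 = rng.normal(0, 0.5, (N_ITEMS, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.5, (N_HIDDEN, N_ITEMS)); b2 = np.zeros(N_ITEMS)

# Train the network to reproduce its own input; unanswered items are
# masked to 0.5 during training so it learns to fill the gaps.
lr = 0.5
for _ in range(1000):
    Xin = np.where(rng.random(X.shape) < 0.4, 0.5, X)
    H = np.tanh(Xin @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - X) / len(X)                  # cross-entropy output gradient
    dH = (dY @ W2.T) * (1 - H ** 2)        # backprop through tanh
    W2 -= lr * (H.T @ dY);   b2 -= lr * dY.sum(0)
    W1 -= lr * (Xin.T @ dH); b1 -= lr * dH.sum(0)

# Inference with incomplete information: the learner has answered only
# items 0 and 1 (both correctly); unknown items are coded 0.5.
partial = np.array([1, 1, 0.5, 0.5, 0.5, 0.5])
pred = sigmoid(np.tanh(partial @ W1 + b1) @ W2 + b2)
print(np.round(pred, 2))   # item 2 predicted correct, items 3-5 wrong
```

The low-confidence outputs (those nearest 0.5) would mark the content areas where the engine still needs data, which is exactly where the next questions would be drawn from.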

The authors tested this neural software in the assessment of a large sample of students enrolled in Education Technology courses.