M. Baidada1, K. Mansouri2, F. Poirier3

2University of Hassan II Casablanca, École Normale Supérieure de l’Enseignement Technique (MOROCCO)
3University of Bretagne-Sud (FRANCE)
E-learning has specific features because it is based on new technologies. These technologies can automate much of the evaluation process and consequently help tutors make decisions about learners’ levels.

The reporting tools in many existing e-learning platforms can then be applied effectively to bring out all the explicit and implicit aspects of the learner’s behavior during a test, aspects that must be taken into account to better assess his or her level.

Our contribution aims to express these aspects as parameters which, through an implemented treatment, feed a level-reassessment algorithm that supports the tutor in making decisions.

Evaluation theories, such as item response theory (IRT), highlight parameters defined a priori, such as the difficulty of the questions. Many other parameters can be defined a posteriori, such as:
• The time elapsed on a question
• The number of attempts
• The number of wrong responses relative to the difficulty of the question
• The number of correct answers relative to the difficulty of the question
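The a priori and a posteriori parameters listed above can be captured in a simple per-question record; the field names and scales below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class QuestionRecord:
    """Data gathered for one question during a test (illustrative names)."""
    difficulty: float   # a priori difficulty, e.g. on a 0-to-1 scale (IRT-style)
    time_spent: float   # a posteriori: seconds elapsed on the question
    attempts: int       # a posteriori: number of attempts made
    correct: bool       # a posteriori: whether the final answer was correct
```

One such record per question gives the reporting tools everything the reassessment step needs.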

These criteria give an assessment of the student’s behavior during an evaluation and a better idea of his or her level.
For each of these parameters, a calculation can be proposed that explains how a final grade may be revised upwards or downwards.
Expressed as variables, these parameters are involved in a global algorithm, which schedules the calculations needed to revise the evaluation grade.
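As a minimal sketch of such a revision step, assuming a 0-to-20 grading scale, a 0-to-1 difficulty scale, and hand-picked weights (none of which are specified in this abstract):

```python
def revised_grade(base_grade, records, time_limit=120.0,
                  w_difficulty=0.5, w_attempts=0.2, w_time=0.2):
    """Revise a 0..20 base grade up or down from behavioral parameters.

    Illustrative only: the abstract names the parameters (difficulty,
    time, attempts, correctness) but not the formula or the weights.
    Each record is a dict with keys 'difficulty' (0..1, a priori),
    'time_spent' (seconds), 'attempts' (int) and 'correct' (bool).
    """
    adjustment = 0.0
    for r in records:
        if r['correct']:
            # reward correct answers in proportion to question difficulty
            adjustment += w_difficulty * r['difficulty']
        else:
            # penalise wrong answers more when the question was easy
            adjustment -= w_difficulty * (1.0 - r['difficulty'])
        # penalise repeated attempts and slow responses
        adjustment -= w_attempts * max(0, r['attempts'] - 1)
        adjustment -= w_time * min(1.0, r['time_spent'] / time_limit)
    # keep the revised grade within the 0..20 scale
    return max(0.0, min(20.0, base_grade + adjustment))
```

A correct answer to a difficult question nudges the grade upward, while extra attempts and long response times pull it down; in practice the weights would have to be tuned with the tutor.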

This algorithm can be an important module integrated into the overall schema of an e-learning system, whose reporting tools have so far been content to collect information on the learner without giving feedback that tutors can exploit to make pedagogical decisions.