A RECOMMENDATION SERVICE FOR SUPPORTING EVALUATORS OF ADAPTIVE EDUCATIONAL SYSTEMS IN THIRD LEVEL INSTITUTIONS
1 Dublin City University (IRELAND)
2 Trinity College Dublin (IRELAND)
About this paper:
Appears in: EDULEARN16 Proceedings
Publication year: 2016
Pages: 467-476
ISBN: 978-84-608-8860-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2016.0109
Conference name: 8th International Conference on Education and New Learning Technologies
Dates: 4-6 July, 2016
Location: Barcelona, Spain
Abstract:
This paper presents an automated hybrid (case-based and knowledge-based) recommendation service to support novice and expert evaluators of adaptive educational systems in third level institutions. In addition, the results of the user evaluations conducted are presented. During these evaluations, both novice and expert evaluators were provided with an online prototype of the recommendation service. The evaluators were then given structured usability questions to answer after interacting with the service. The aim of these evaluations was to find out whether the evaluators were able to effectively identify appropriate evaluation techniques, whether they perceived the service to be useful, and whether they learned from using it. Furthermore, a review of evaluation techniques used when evaluating adaptive educational systems was also conducted and the results presented.

During evaluation of the recommendation service, the participants were divided into two user groups:
(a) Novice Evaluators (User Group 1): The goal of this user evaluation was to find out whether the novice evaluators, after interacting with the hybrid recommender system, were able to effectively identify appropriate evaluation techniques. In addition, the experiment aimed at finding out perceived usability and learnability (i.e. whether the evaluators were able to learn after interacting with the recommender system). The two objectives formulated for this user group were:

Objective 1: Identification of Appropriate Techniques:
The first objective aimed at finding out whether the recommender identified appropriate evaluation techniques:
• The evaluation methods, criteria and metrics to be used when evaluating an adaptive system.
• The evaluation approaches to be used when evaluating adaptive systems.

In addition, the experiment aimed at finding out which features (i.e. characteristics) of the recommended evaluation techniques the novice evaluators found most useful about the recommender system.

Objective 2: Usability (User satisfaction and Learnability)
In terms of user satisfaction, the benefit to users lies in the perceived usability of the various recommendation functions provided by the hybrid recommender system. User satisfaction is typically measured through usability questionnaires administered after users complete given tasks with a system.

This experiment objective aimed at finding out:
• Were the novice evaluators satisfied after interacting with the recommender system?
• Were the novice evaluators able to learn after interacting with the recommender system?

(b) Expert Evaluators (User Group 2):
Expert evaluators were asked to evaluate the choice of method for an adaptive system evaluation, that is, the appropriateness of the explanation given for each recommended technique. The experimental setup for this user group was as follows: in order to participate in this evaluation, users were required to be familiar with adaptive systems or the evaluation of such systems and to have at least three years of experience. A total of 60 expert evaluators participated in the online and structured interview-based study. Of these, 49 completed the full evaluation process. Participants were recruited from Trinity College Dublin, the UMAP (2011, 2012 and 2013) conference and the AH conference, and the DataTEL and Recommender communities.

The results of these evaluations are discussed and presented in this paper.
Keywords:
Adaptive e-learning, Expert Evaluators, Novice Evaluators, Hybrid Recommendation Services.