COMBINED EVALUATION OF A VIRTUAL LEARNING ENVIRONMENT: USE OF QUALITATIVE METHODS AND LOG INTERPRETATION TO EVALUATE A COMPUTER MEDIATED LANGUAGE COURSE
Barcelona Media (SPAIN)
About this paper:
Appears in: EDULEARN09 Proceedings
Publication year: 2009
Pages: 5794-5804
ISBN: 978-84-612-9801-3
ISSN: 2340-1117
Conference name: 1st International Conference on Education and New Learning Technologies
Dates: 6-8 July, 2009
Location: Barcelona, Spain
Abstract:
In this paper we present an innovative framework for the evaluation of AutoLearn, a project funded by the EU under the Lifelong Learning Programme. The data collected through this evaluation will allow us to discuss the suitability of using diverse methodologies to evaluate language learning platforms.

The evaluation focuses mainly on the usability of the platform, while also taking into account didactic and linguistic aspects involved in the language learning process. To gather all these data we are carrying out an evaluation that combines exploratory methods, such as questionnaires and observations, with the analysis of user navigation through the platform (logs). In this sense, the objective of our work is to test the accuracy and usefulness of this combined evaluation framework in AutoLearn and, subsequently, to develop a methodology for the analysis and evaluation of computer mediated language learning environments in general.

The empirical data collected in the evaluation come from real users in a real learning context. We have tested AutoLearn with seventy students from English language courses at a Catalan university. To collect the exploratory data we have used initial questionnaires to learn about the users' previous experience and expectations; final questionnaires to capture the perceived ease of use of the platform and its usefulness for learning English; and observations of the learning scenario. In addition, quantitative metrics have been gathered, mainly through logs: navigation across the main pages, time spent on each exercise, number of clicks and frequency of use.
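As an illustration of how such metrics can be derived, the following minimal Python sketch aggregates time spent and click counts per exercise from navigation logs. The log format (a CSV file with user_id, timestamp, exercise_id and event columns) and the way time is attributed between consecutive events are assumptions made for the example, not the actual AutoLearn log schema.

import csv
from collections import defaultdict
from datetime import datetime

def aggregate_logs(path):
    # Hypothetical log schema: one row per event with user_id, timestamp,
    # exercise_id and event columns (assumed here for illustration only).
    time_spent = defaultdict(float)   # seconds per (user, exercise)
    clicks = defaultdict(int)         # events per (user, exercise)
    last_event = {}                   # last (key, timestamp) seen per user
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["user_id"], row["exercise_id"])
            ts = datetime.fromisoformat(row["timestamp"])
            clicks[key] += 1
            # Time between two consecutive events of the same user is
            # attributed to the exercise of the earlier event.
            if row["user_id"] in last_event:
                prev_key, prev_ts = last_event[row["user_id"]]
                time_spent[prev_key] += (ts - prev_ts).total_seconds()
            last_event[row["user_id"]] = (key, ts)
    return time_spent, clicks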

In the final paper we will present the analysis of the different types of data and the relationships found among them. Preliminary results of our evaluation in these courses show that gathering information from different sources is a good way to capture all the elements of the user experience with the platform. Therefore, our first step will be to find out how the information from these sources is intertwined and how the sources complement each other. For instance, the time spent on an exercise can be an indication of its difficulty. However, this difficulty may arise because the language level is not suited to the students' level (linguistic or didactic aspects) or because the application offers poor ease of use (usability aspects). With the information gathered from the questionnaires and observations we will be able to find the appropriate explanation for this kind of question.
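One possible way to combine the two sources, sketched below under assumed field names and thresholds (they are not part of the project's instruments), is to cross-reference a log-derived indicator such as mean time on an exercise with the perceived ease of use reported in the final questionnaire.

def interpret(mean_time_s, ease_of_use, long_time_s=300, low_ease=3):
    # ease_of_use: 1-5 Likert rating from the final questionnaire (assumed scale).
    # long_time_s and low_ease are illustrative thresholds, not project values.
    if mean_time_s < long_time_s:
        return "no salient difficulty visible in the logs"
    if ease_of_use <= low_ease:
        return "long time and low perceived ease of use: usability aspects"
    return "long time despite good ease of use: linguistic or didactic aspects"

# Example usage with made-up numbers:
print(interpret(mean_time_s=420, ease_of_use=2))  # points to usability
print(interpret(mean_time_s=420, ease_of_use=5))  # points to language level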

In a second step of our evaluation we will apply this methodology in all the AutoLearn courses. In this step we will also take into account that AutoLearn will be implemented in different learning contexts (not only at the university but also in adult and secondary schools), so the evaluation must be able to cope with their different specificities. Moreover, since AutoLearn is a European project, the evaluation will also consider the cultural differences of the countries in which the platform will be used.

Finally, we note that the project has planned two evaluation phases. After the first phase we will validate the evaluation framework and improve it according to the results. The second phase will allow us to determine whether the improvements made to the evaluation framework really result in a better evaluation process.
Keywords:
VLE evaluation, blended language learning, quantitative-qualitative methods, logs.