FACILITATING EVALUATION SUPPORT FOR SERIOUS GAMES
Graz University of Technology (AUSTRIA)
About this paper:
Appears in: INTED2017 Proceedings
Publication year: 2017
Pages: 5887-5896
ISBN: 978-84-617-8491-2
ISSN: 2340-1079
doi: 10.21125/inted.2017.1375
Conference name: 11th International Technology, Education and Development Conference
Dates: 6-8 March, 2017
Location: Valencia, Spain
Abstract:
Digital learning games represent an e-learning technology that is increasingly recognized by educational practitioners. With their highly engaging and motivating character, games constitute effective educational tools for creating authentic learning tasks and meaningful, situated learning experiences. Psychological research has targeted many aspects of leisure and educational games, such as engagement, challenge, motivation, and achievement. Typically, a wide range of psychological constructs is used and related to game design elements; for example, knowledge, goals, and encouragement are connected with specific game design elements. Another example is the concept of flow, which describes a state in which people are highly engaged and lose track of time. This often happens when competence development and game challenges are balanced, i.e. the game demands neither too much nor too little from the player.

In order to assess and guarantee that serious games have such qualities, evaluation studies are conducted. Evaluation is an important task because it reveals relevant information about the quality of the technology for all stakeholders and decision makers. It involves collecting and analysing information about the users' activities as well as the software's characteristics and outcomes. Its purpose is to make judgments about the benefits of a technology, to improve its effectiveness, and/or to inform programming decisions. The evaluation process can be broken down into three key phases:
(1) Planning,
(2) Collecting, and
(3) Analysing.

In each phase there are a number of steps that are generally considered and carried out in an evaluation.
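
As a rough illustration of how these three phases might translate into software, the workflow could be expressed as three explicit steps. This is a hypothetical sketch for the reader's orientation; the type and method names are assumptions and do not reflect the component described in this paper.

  // Hypothetical sketch of the three-phase evaluation workflow (TypeScript).
  // Names and signatures are illustrative assumptions, not the paper's API.
  interface EvaluationPlan {
    goal: string;           // quality to be assessed, e.g. "flow" or "engagement"
    constructs: string[];   // psychological constructs of interest
    game: string;           // serious game under evaluation
  }

  interface EvaluationWorkflow<Data> {
    // Phase 1: planning - decide what to evaluate and for which game.
    plan(goal: string, constructs: string[], game: string): EvaluationPlan;
    // Phase 2: collecting - gather user data according to the plan.
    collect(plan: EvaluationPlan): Data[];
    // Phase 3: analysing - compute values for the planned evaluation variables.
    analyse(plan: EvaluationPlan, data: Data[]): Record<string, number>;
  }
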

However, performing an evaluation in such a systematic manner is a time-consuming and complex task. For this reason, a software component has been developed that supports the systematic evaluation of serious games. This component captures game-based user data continuously and directly from the game. A data scheme is provided that structures user interactions and allows the game to send user data organised in categories and attributes, which the software component can use for further processing. For the analysis of this data, evaluation measures are defined that assess the qualities to be measured (e.g. flow, engagement, etc.). Such measures are defined on the basis of the evaluation data and allow automatic analysis. A preliminary set of evaluation measures can be complemented with customised measures that the evaluator can define and store. Furthermore, automated reports and visualisations can be retrieved for the specified evaluation variables, and the data can be exported for further analysis.
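
To make this more concrete, the following sketch shows how game events organised in categories and attributes might be captured and how a customised measure could be defined over them. All names, the event fields, and the example measure are assumptions made for illustration only; they are not the component's actual data scheme or interface.

  // Hypothetical sketch (TypeScript): categorised game events and a
  // customised evaluation measure defined over them.
  type EventAttributes = Record<string, string | number>;

  interface GameEvent {
    timestamp: number;           // captured continuously and directly from the game
    category: string;            // e.g. "challenge", "achievement", "navigation"
    attributes: EventAttributes; // structured details of the interaction
  }

  // An evaluation measure maps collected events to a single value,
  // e.g. a rough indicator of challenge balance or engagement.
  type EvaluationMeasure = (events: GameEvent[]) => number;

  // Example of a customised measure an evaluator might define and store:
  // the share of challenge events that were completed.
  const challengeCompletionRate: EvaluationMeasure = (events) => {
    const challenges = events.filter((e) => e.category === "challenge");
    if (challenges.length === 0) return 0;
    const completed = challenges.filter((e) => e.attributes["result"] === "completed");
    return completed.length / challenges.length;
  };

  // Computed values could then feed automated reports or be exported
  // (here as CSV) for further analysis in external tools.
  function exportCsv(measures: Record<string, number>): string {
    const rows = Object.entries(measures).map(([name, value]) => `${name},${value}`);
    return ["measure,value", ...rows].join("\n");
  }
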

This asset has added value for various stakeholders: developers, who get an instrument to easily evaluate their software; training providers, who get information about the pedagogical value of games; and players, who are enabled to provide quick feedback without distraction.

The full paper will describe in detail how the software component works, how evaluation data is structured, how evaluation measures are defined, and how reports are created. Moreover, concrete examples are given of the games with which this software component is used and how.
Keywords:
Serious game, evaluation, evaluation tool.