COGNITIVE ARCHITECTURE FOR EMBODIED CONVERSATIONAL AGENT: APPLICATION ON VIRTUAL LEARNING ENVIRONMENT
1 École Nationale D'Ingénieurs De Brest (ENIB) (FRANCE)
2 Arts, Sciences & Technology University in Lebanon (LEBANON)
About this paper:
Appears in: EDULEARN16 Proceedings
Publication year: 2016
Pages: 4973-4983
ISBN: 978-84-608-8860-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2016.2179
Conference name: 8th International Conference on Education and New Learning Technologies
Dates: 4-6 July, 2016
Location: Barcelona, Spain
Abstract:
This work deals with Virtual Learning Environments (VLE), in which Virtual Reality (VR) is applied to learning. VLE are used to acquire different kinds of competencies:
1) Technical gestures,
2) Declarative knowledge and
3) Procedural knowledge.

The interest of VR for learning is that the context is simulated and that the user's body is involved in the learning task through natural interaction. Immersing these scenarios in a VR environment lets learners interact in real time with the virtual objects it contains in order to perform the required activities. Learners can execute the scenarios repeatedly to gain experience and to try different solutions. Nevertheless, building such VLE remains expensive and time consuming, since computer scientists often embed their own pedagogical vision in the implemented scenario without involving domain experts and teachers in the design of the VLE, even though these experts are the ones able to provide accurate pedagogical and domain knowledge about the scenario. We propose a method based on a UML (Unified Modeling Language) meta-model to represent the environment, the actions of the user and the pedagogical scenarios. The model is called MASCARET (Multi-Agent System for Collaborative, Adaptive and Realistic Environments for Training).
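
As a minimal illustrative sketch only (these class and attribute names are hypothetical and do not reflect the actual MASCARET API), the kind of information such a meta-model captures about an environment, the user's actions and a pedagogical scenario could be represented as follows:

# Hypothetical, simplified representation of the concepts an environment
# meta-model such as MASCARET describes; all names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    """A virtual object of the environment (e.g. a centrifuge)."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Action:
    """An action the user can perform on entities of the environment."""
    name: str
    targets: List[str]  # names of the entities involved

@dataclass
class ProcedureStep:
    """One step of a pedagogical scenario defined by a domain expert."""
    action: Action
    description: str

@dataclass
class PedagogicalScenario:
    """An ordered procedure the learner has to follow in the VLE."""
    title: str
    steps: List[ProcedureStep]

# Example: a fragment of a blood-analysis procedure (illustrative content).
centrifuge = Entity("centrifuge", {"state": "idle"})
sample = Entity("blood_sample", {"labelled": True})
scenario = PedagogicalScenario(
    title="Blood analysis",
    steps=[
        ProcedureStep(Action("load_sample", ["blood_sample", "centrifuge"]),
                      "Place the labelled sample in the centrifuge."),
        ProcedureStep(Action("start_centrifuge", ["centrifuge"]),
                      "Start the centrifugation cycle."),
    ],
)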

Moreover, the learner should be able to interact naturally with a virtual agent representing the tutor. Typically, the learner wants or needs to interrupt the tutor and ask additional questions to gain more information about the actions to be performed or the objects to be used.

Hence, the tutor agents should have two critical features:
(1) intelligent and sound replies based on rational reasoning, and
(2) credible, human-like behaviors and interactions with the human learners, so as to better communicate their replies and intentions.

In this article we propose a model to build a generic Embodied Conversational Agent (ECA) with Intelligent Tutoring System (ITS) capabilities in a VLE built on MASCARET. MASCARET defines the agents' knowledge base about the systems in which users carry out activities, modelled as procedures defined by domain experts.
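
Purely as an illustration of this idea (the helper below is hypothetical, not part of MASCARET, and reuses the scenario object from the earlier sketch), a tutor agent can consult such a procedure description to check a learner's action against the expected step:

# Hypothetical sketch: the tutor agent compares an observed learner action
# with the current step of the expert-defined procedure.
def check_learner_action(scenario, step_index, observed_action_name):
    """Return feedback text and whether the procedure can advance."""
    expected = scenario.steps[step_index].action
    if observed_action_name == expected.name:
        return "Correct, proceed to the next step.", True
    return (f"Unexpected action '{observed_action_name}'; "
            f"the procedure expects '{expected.name}'.", False)

feedback, advance = check_learner_action(scenario, 0, "start_centrifuge")
print(feedback)  # explains that 'load_sample' was expected first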

Based on a BDI like cognitive architecture, the desires of the tutor agent are stemmed from its need to follow and abide to the defined procedural scenario. Upon learner’s interruptions, after reasoning on all learner’s actions and following the procedural scenario, tutor behaviors and communication behaviors are also selected to reply accordingly to the learner. Expressing these intentions through concrete communication actions is achieved in our work by integrating SAIBA (Situation, Agent, Intention, Behavior and Animation) framework. After building realistic communicative intentions, Function Markup Language (FML) is used by the Intention Planner in SAIBA to represent these intentions and specify the domain specific issues including performative actions and updated contextual information of world objects and emotions. This agent is materialized in front of the learner as an ECA that communicates and interacts with the human learner in a credible human-like manner.
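
The sketch below illustrates, under stated assumptions, how a selected intention might be rendered for the SAIBA Intention Planner. FML has no single finalized schema, so the tag names and the build_fml helper are illustrative only, not a normative FML or MASCARET interface:

# Hypothetical sketch of a BDI-like deliberation outcome being turned into
# an FML-like message for the SAIBA Intention Planner. Tag names are
# illustrative; FML has no single standardized schema.
def build_fml(intention, target_object, emotion):
    """Render a communicative intention as an FML-like XML fragment."""
    return (
        "<fml>\n"
        f"  <performative type=\"{intention}\"/>\n"
        f"  <world ref=\"{target_object}\"/>\n"
        f"  <emotion type=\"{emotion}\"/>\n"
        "</fml>"
    )

# Desire: follow the procedural scenario; the learner has just interrupted
# with a question about the centrifuge, so the selected intention is to inform.
print(build_fml(intention="inform",
                target_object="centrifuge",
                emotion="neutral"))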

Our implemented model is then verified in a concrete pedagogical scenario for learning blood analysis procedures in a biomedical laboratory.
Keywords:
Virtual Learning Environment, Cognitive Architecture, Embodied Conversational Agent, Intelligent Tutoring System, Pedagogical Scenario, Knowledge Base.