INTRODUCING ARTIFICIALLY INTELLIGENT BEHAVIOURS IN EDUCATIONAL ROBOTICS
Artificial Intelligence is a well-known advanced topic with a broad variety of application fields. Among these, robotics is one of the most promising, both for its practical effectiveness and for its future developments. Moreover, robotics is enjoying a new golden age thanks to the growing interest of the general public, which is particularly attracted by the apparently 'intelligent' and reactive behaviour of robots and by the widespread expectation that in less than 20 years robots will be the new basic interfaces between humans and computing resources. From an educational point of view, robots are already particularly interesting today because, as multidisciplinary didactical platforms, they can promote the acquisition of technological and scientific competences at different levels. Educational robotics is therefore a new subject in which teachers and researchers are engaged in providing methodological and practical frameworks for developing robot-enhanced, project-based activities that support in a new way the teaching/learning of scientific and non-scientific disciplines.

The paper deals with an experience of applying artificial intelligence to a mini-robot so that it can recognise handwritten commands using a very simple sensing device. This work was carried out in the framework of the European TERECoP project (www.terecop.eu), which aims to define a curriculum for teacher training on educational robotics and to provide a repository of robot-enhanced experiences. The robotic architecture chosen is the Lego Mindstorms NXT, and the language used to develop the experience is the open-source NXC (Not eXactly C), a textual language suitable for the level of complexity of the proposed solution. Commands are given on a small stripe of paper in the form of a handwritten sequence of black squares with possible intermediate empty squares. During a (run-time) training phase, each defined command is associated with a simple motion of the robot (move forward, move backward or rotate for a while).
The ‘controlling software’ consists of a Hopfield Neural Network implemented within the robot using its standard firmware. Instead of representing stripe commands with tables directly derived from their geometrical characteristics, we need a robust internal representation to cope with the unavoidable uncertainty of handwritten code, which persists even when the same person reproduces a command several times. In some sense this is a simplified version of the OCR (Optical Character Recognition) problem: a neural network can recognise such commands more in the way a human does.

The paper is divided into four sections. Section 1 introduces some methodological issues regarding educational robotics and, specifically, what kinds of problems suggest an AI approach and the motivations of interest for this approach. Section 2, after a brief overview of the hardware/software used in the experiment, describes the general problem of handwriting recognition and the choices made by the authors to reduce it to an affordable one. Section 3 recalls some features of the Neural Network paradigm, motivates, by analysing the restrictions of the NXT firmware support, the choice of the Hopfield model over a more general back-propagation one, and describes the implementation. The final Section 4 discusses the results obtained and how the experience can serve as a general approach for similar AI solutions, together with some remarks on its implicit educative value and its prospects for the future use of robots.