TOWARDS FACILITATING LEARNING AND IMPROVING EDUCATION WITH TIAGO ROBOT

M. Dragoi1, I. Mocanu2, O. Cramariuc1, B. Cramariuc1

1Centrul IT pentru Stiinta si Tehnologie, Bucharest (ROMANIA)
2University Politehnica of Bucharest (ROMANIA)
Recent developments in robotics and machine learning mean that robot-assisted environments are no longer a distant dream. We are currently witnessing their slow but steady integration into everyday life, both at home and at school. Robots have great potential as an educational technology [1,2]. They can be used to facilitate learning and improve the educational performance of students in various fields such as physics and mathematics [3]. In these scenarios, the movement of the robot serves as a learning tool for basic concepts such as rotation, transformation, displacement and force. In more advanced scenarios, the robot can become both the tool and the mentor for the subject being taught [4].

While robots can take on a number of different roles in the learning process (passive teaching aid, co-learner, mentor, etc.), the main concern is guaranteeing their safety around humans. To this end, the robot's actions must be socially acceptable, and the results of those actions should be as close as possible to the desired outcome. Key areas of interest in this respect are movement, environment recognition, and kinematic planning for interaction with the environment.

The presented work focuses on robotic manipulation, specifically the automatic identification and evaluation of grasping positions for a set of common objects. The main protagonist in our scenarios is the TIAGo robot, produced by the Spanish robot maker PAL Robotics. TIAGo is a mobile service robot with an extendable torso and a manipulator arm for grabbing tools and objects. Its sensor suite allows it to perform a wide range of perception, manipulation, and navigation tasks. Our implementation and experiments are facilitated by TIAGo's Robot Operating System (ROS) environment and the Gazebo simulator.

Usually, identifying grasping positions requires some knowledge of the nature of the object, which humans naturally acquire through experience. Geometric information about the objects, as acquired through an RGB-D camera, together with material and physical properties such as the friction coefficient and the center of mass, is usually not sufficient on its own. We address this with a supervised learning setup trained on pre-computed valid grasping parameters. Since neural networks hold the state of the art in generalizing from limited datasets, such methods are used and tested to extrapolate knowledge from human-annotated data to novel objects.
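The supervised setup described above can be illustrated with a deliberately simplified sketch. Everything below is an illustrative assumption rather than the actual pipeline: the two geometric features (approach angle and gripper-opening margin), their validity thresholds, and the logistic-regression scorer are stand-ins for features extracted from the RGB-D point cloud and for the neural networks used in practice.

```python
# Illustrative sketch only: learning to score grasp candidates from simple
# geometric features. Features, thresholds, and the linear model are
# hypothetical stand-ins for the paper's RGB-D features and neural networks.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_grasps(n):
    """Generate hypothetical labelled grasp candidates."""
    angle = rng.uniform(0.0, np.pi / 2, n)   # angle between approach axis and surface normal (rad)
    margin = rng.uniform(-2.0, 5.0, n)       # gripper opening margin (cm)
    X = np.column_stack([angle, margin])
    # Assumed labelling rule: a grasp is valid when the approach is
    # near-perpendicular to the surface and the gripper opens wide enough.
    y = ((angle < 0.4) & (margin > 0.0)).astype(float)
    return X, y

def train_logistic(X, y, lr=0.5, steps=2000):
    """Fit a logistic-regression grasp scorer by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted grasp-success probability
        g = p - y                               # gradient of the cross-entropy loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

X, y = synthetic_grasps(400)
w, b = train_logistic(X, y)
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((scores > 0.5) == (y == 1.0)).mean()
```

In the actual system, this minimal linear model would be replaced by the neural networks mentioned above, trained on human-annotated grasps and evaluated on novel objects in the Gazebo simulation.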

References:
[1] T. Balch, J. Summet, D. Blank, D. Kumar, et al., Designing personal robots for education: hardware, software, and curriculum, IEEE Pervasive Computing, 7(2), 2008, 5–9.
[2] O. Mubin, C. J. Stevens, S. Shahid, A. A. Mahmud, and J.-J. Dong, A Review of the Applicability of Robots in Education, Technology for Education and Learning, 1(1), 2013.
[3] K. Highfield, J. Mulligan, and J. Hedberg, Early mathematics learning through exploration with programmable toys, Proc. Joint Conference Psychology and Mathematics, 2008, 17–21.
[4] J. Han and D. Kim, R-Learning services for elementary school students with a teaching assistant robot, Proc. HRI, 2009, 255–256.