Miguel Hernandez University (SPAIN)
About this paper:
Appears in: INTED2013 Proceedings
Publication year: 2013
Pages: 3097-3106
ISBN: 978-84-616-2661-8
ISSN: 2340-1079
Conference name: 7th International Technology, Education and Development Conference
Dates: 4-5 March, 2013
Location: Valencia, Spain
In this work we describe a new educational software tool we have developed for use in a robotics and computer vision subject.

When a mobile robot has to carry out a task in an environment, a map is usually needed so that the robot can estimate its position and orientation and navigate to the target points. In this subject, students learn the different strategies that can be used to achieve these aims, using as input the images provided by a camera mounted on the robot.

Among these strategies, the appearance-based approach has recently attracted the interest of researchers due to its simplicity and robustness. In these methods, the scenes are stored without any feature extraction, and recognition is based on matching whole images. Omnidirectional vision systems are usually employed because of their low cost and the richness of the information they provide. Taking these concepts into account, the approach consists of two steps.
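To illustrate the core idea of the appearance-based approach (this is a didactic sketch, not the tool's actual implementation), the following Python/NumPy snippet compares whole images directly, with no feature extraction, and recognizes a query scene as the stored image with the smallest global-appearance distance. The image sizes and noise model are arbitrary assumptions for the example.

```python
import numpy as np

def image_distance(img_a, img_b):
    """Euclidean distance between two whole images treated as vectors.
    No features are extracted: the appearance-based approach compares
    the global appearance of the scenes directly."""
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    return np.linalg.norm(a - b)

def best_match(query, map_images):
    """Index of the stored map image most similar to the query."""
    dists = [image_distance(query, m) for m in map_images]
    return int(np.argmin(dists))

# Toy example: three stored "scenes" and a query taken near scene 1
rng = np.random.default_rng(0)
map_imgs = [rng.integers(0, 256, (32, 64)) for _ in range(3)]
query = map_imgs[1] + rng.normal(0, 5, (32, 64))  # scene 1 plus noise
print(best_match(query, map_imgs))  # 1
```

In a real system the raw images would first be compressed (Fourier, PCA or gradient-based descriptors, as described below), since comparing full-resolution images is costly and sensitive to illumination changes.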

1. First, the robot traverses the unknown environment, capturing images from several points of view with known coordinates. As images are high-dimensional data, it is crucial to extract the most relevant information before storing them. Several techniques may be used for this purpose. In this educational software, we have implemented some of the most important ones: Fourier-based, PCA-based and gradient-based approaches. The student can thus test them and compare in which situations each approach works best.
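As an example of one such compression technique (a minimal sketch, not the software's code), a Fourier-based descriptor for omnidirectional images can be built by computing the 1-D FFT of each row of the panoramic image and keeping only the magnitudes of the first few coefficients. A rotation of the robot corresponds to a circular column shift of the panorama, which changes only the phase of the FFT, so the magnitude signature is rotation-invariant. The image size and number of retained coefficients (`k`) are illustrative choices.

```python
import numpy as np

def fourier_signature(panorama, k=8):
    """Compress a panoramic image into its Fourier signature: the 1-D
    FFT of each row, keeping only the magnitudes of the first k
    coefficients. Magnitudes are invariant to circular column shifts,
    i.e. to rotations of the robot."""
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :k])

rng = np.random.default_rng(1)
pano = rng.integers(0, 256, (16, 128))
shifted = np.roll(pano, 40, axis=1)  # same scene after a rotation
sig_a = fourier_signature(pano)
sig_b = fourier_signature(shifted)
print(np.allclose(sig_a, sig_b))  # True: magnitudes unchanged by rotation
```

This also shows the compression ratio at work: a 16x128 panorama (2048 values) is reduced to a 16x8 signature (128 values) while retaining the low-frequency appearance of the scene.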

2. Once the map is built, the robot can compute its location within it. To do so, it acquires a new image, compresses it and compares it with the data stored in the map. As a result, the current position and orientation of the robot are computed. With this software, the student can test different localization techniques.
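The localization step can be sketched as follows (again a didactic example under the same Fourier-signature assumptions as above, not the tool's implementation): the new image is compressed and matched against the stored signatures to find the nearest scene, and the robot's orientation is then recovered from the phase difference of the FFT coefficients, since a rotation corresponds to a circular column shift.

```python
import numpy as np

def fourier_signature(panorama, k=8):
    """Row-wise FFT magnitudes: the compressed map descriptor."""
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :k])

def localize(query, map_panoramas):
    """Compress the new image and return the index of the closest
    stored scene (nearest neighbour in descriptor space)."""
    q = fourier_signature(query)
    dists = [np.linalg.norm(q - fourier_signature(p)) for p in map_panoramas]
    return int(np.argmin(dists))

def estimate_rotation(query, stored):
    """Recover the column shift (robot heading) between the query and
    the matched panorama from the phase difference of the first FFT
    coefficient, averaged over rows."""
    n = query.shape[1]
    fq = np.fft.fft(query.astype(float), axis=1)[:, 1]
    fs = np.fft.fft(stored.astype(float), axis=1)[:, 1]
    dphi = np.angle(fq * np.conj(fs))  # per-row phase difference
    return int(np.round(-np.mean(dphi) * n / (2 * np.pi)) % n)

# Toy map: three panoramas; the query is scene 2 seen after a rotation
rng = np.random.default_rng(2)
scenes = [rng.integers(0, 256, (16, 128)).astype(float) for _ in range(3)]
query = np.roll(scenes[2], 40, axis=1)
idx = localize(query, scenes)
print(idx, estimate_rotation(query, scenes[idx]))  # 2 40
```

In this toy example the query is an exact shifted copy of a stored scene; with real images (as in the databases included with the tool) the match is only approximate, which is precisely what makes parameter tuning and algorithm comparison instructive for the student.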

We have included several databases of images captured in real indoor environments, under realistic lighting conditions, so that the student can test the mapping algorithms. Some intermediate images are also included so that the localization algorithms can be tested as well.

The tool is fully interactive. Both the mapping and the localization processes are fully configurable, so that the student can see how correct tuning of all the parameters is essential to obtain acceptable results.

We are aware that students sometimes get lost in the classroom, as the algorithms they study can be quite difficult to understand. That is why we have developed this interactive tool. We expect it to help students fully understand appearance-based methods and other basic computer vision and robotics concepts. As it provides real data, we also expect them to learn how to face the problems that can appear in a realistic application and to improve and design new algorithms.