Miguel Hernandez University (SPAIN)
About this paper:
Appears in: ICERI2009 Proceedings
Publication year: 2009
Pages: 48-57
ISBN: 978-84-613-2953-3
ISSN: 2340-1095
Conference name: 2nd International Conference of Education, Research and Innovation
Dates: 16-18 November, 2009
Location: Madrid, Spain
When a robot has to carry out a task in an environment, a map is usually needed so that the robot can estimate its position and orientation and navigate to the target points. This map can be built using the images taken by a vision system. In this kind of application, the appearance-based approach has recently attracted the interest of researchers due to its simplicity and robustness. In these methods, the scenes are stored without any feature extraction, and recognition is achieved by matching whole images. Omnidirectional vision systems are usually employed due to their low cost and the richness of the information they provide. The approach consists of two phases:

a) Map building. The robot goes through the environment to be mapped and takes images from several points of view. As images are high-dimensional data, a compression phase is usually needed to extract the most relevant information from each image. Several algorithms can be used for this purpose, such as PCA (Principal Components Analysis) or DFT (Discrete Fourier Transform). After this phase, the map, consisting of a data vector for every location, is built.
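As an illustration of the DFT option, the compression step can be sketched as follows. This is a minimal sketch, not the tool's actual implementation: it assumes a panoramic grey-scale image stored as a 2-D array whose rows span the full 360 degrees, and keeps only the first k Fourier coefficients of each row (often called the Fourier signature of the image).

```python
import numpy as np

def dft_signature(image, k=8):
    """Compress a panoramic image by keeping the first k low-frequency
    Fourier coefficients of each row (the 'Fourier signature').
    `image` is a 2-D grey-scale array; each row spans 360 degrees."""
    rows_fft = np.fft.fft(image, axis=1)   # 1-D DFT along each row
    return rows_fft[:, :k]                 # retain low frequencies only

# Toy panoramic image: 4 rows x 64 columns
img = np.tile(np.sin(np.linspace(0, 2 * np.pi, 64)), (4, 1))
sig = dft_signature(img, k=8)
print(sig.shape)  # (4, 8): far fewer values than the 4 x 64 original
```

The parameter k controls how much information is retained from each image, which is exactly the trade-off the student explores in the tool.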

b) Localization. When the robot has to carry out a task in the environment, it has to compute its location in the map. To do so, the robot acquires a new image, compresses it and compares it with the data stored in the map. As a result, the location and orientation of the robot are computed.
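The comparison step above can be sketched as a nearest-neighbour search over the compressed descriptors. The example below is an assumed minimal formulation, not the tool's code: it reuses the Fourier-signature idea and exploits the fact that the magnitude of a row-wise DFT is invariant to circular row shifts, so a rotation of the robot (a horizontal shift of the panoramic image) does not change the matched location.

```python
import numpy as np

def dft_signature(image, k=8):
    """First k Fourier coefficients of each row of a panoramic image."""
    return np.fft.fft(image, axis=1)[:, :k]

def localize(test_sig, map_sigs):
    """Return the index of the map location whose signature magnitude
    is closest (Euclidean distance) to the test signature."""
    dists = [np.linalg.norm(np.abs(test_sig) - np.abs(s)) for s in map_sigs]
    return int(np.argmin(dists))

# Map: three synthetic panoramic images taken at different locations
rng = np.random.default_rng(0)
map_imgs = [rng.random((4, 64)) for _ in range(3)]
map_sigs = [dft_signature(m) for m in map_imgs]

# Test image: location 1 seen with the robot rotated (circular row shift)
test_img = np.roll(map_imgs[1], 10, axis=1)
print(localize(dft_signature(test_img), map_sigs))  # 1
```

Once the location is found, the orientation can in principle be recovered from the phase difference between the test and map signatures, though that step is omitted here for brevity.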

This work presents a software tool we have built to be used in a robotics and computer vision subject. With this tool, the students can fully understand the appearance-based approach in robotics mapping, with the following features:
- A database with panoramic images (both grey-scale and colour) of an environment is included. With these images, the student can build a map. Also, some test images are included so that the student can compute the location and orientation of the robot within the map when it captured these test images.
- The student can select different channels from the images to build the database (RGB, HSV, HLS, etc.).
- Some methods to compress the information of the images are implemented (DFT and PCA) so that the student can test their performance and the cases in which each one works best.
- The student can decide the amount of information to retain from each image.
- The tool is fully interactive. Once the map building and the localization steps are finished, several graphical representations of the data can be generated to assess the accuracy of the method used.
- Some optional filters, fully configurable, have been included in the tool. Thanks to them, the student can make the map more robust against illumination variations and changes in the position of some objects in the environment.
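One simple example of the kind of illumination-robust preprocessing such filters can perform is a per-image zero-mean, unit-variance normalization. This is only an assumed illustration of the principle, not the specific filters the tool implements: any global affine illumination change (brighter or darker scene) maps to the same normalized image.

```python
import numpy as np

def normalize_illumination(image):
    """Normalize an image to zero mean and unit variance, so that
    global affine illumination changes (a * I + b, a > 0) cancel out."""
    img = image.astype(float)
    return (img - img.mean()) / (img.std() + 1e-9)

rng = np.random.default_rng(1)
scene = rng.random((4, 64))
brighter = 1.8 * scene + 25.0   # same scene under stronger illumination
same = np.allclose(normalize_illumination(scene),
                   normalize_illumination(brighter), atol=1e-6)
print(same)  # True
```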

This tool has proved to be very useful in helping the students fully understand the appearance-based methods and other basic computer vision concepts. The students distinguish the different compression methods and the parameters that must be configured for them to work correctly. They learn the different colour representations of an image, the use of omnidirectional vision and the accuracy achievable in map building and localization. Also, they understand the problem of illumination variation and study some strategies to mitigate it. Once the practical sessions have been completed, the student is capable of developing more complex algorithms to control the movements of a robot using this approach.
Keywords: computer vision, mobile robots, educational software, higher education.