Miguel Hernández University (SPAIN)
About this paper:
Appears in: INTED2018 Proceedings
Publication year: 2018
Pages: 1232-1241
ISBN: 978-84-697-9480-7
ISSN: 2340-1079
doi: 10.21125/inted.2018.0187
Conference name: 12th International Technology, Education and Development Conference
Dates: 5-7 March, 2018
Location: Valencia, Spain
Nowadays, the use of mobile robots has increased substantially, and they can be found in many environments, solving a wide range of tasks. When a mobile robot has to perform a task autonomously in an unknown environment, it must complete two fundamental steps. On the one hand, it has to generate a model of the environment (namely, a map); on the other hand, it must be able to use this map to estimate its current pose (position and orientation). The robot can extract the necessary information from the unknown environment using the different sensors it may be equipped with. This information is compared with the map data to estimate the pose of the robot. Several kinds of sensors can be used for this purpose, such as laser, touch or vision sensors.
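The comparison of current sensor information against stored map data can be illustrated with a toy sketch (not part of the paper's tool; the map entries, descriptors and poses below are invented for illustration). The map is assumed to be a list of poses, each paired with a low-dimensional descriptor of the sensor reading captured there, and the robot's pose is estimated by a nearest-neighbour search over those descriptors:

```python
import math

# Hypothetical map: (pose, descriptor) pairs, where pose is (x, y, theta)
# and the descriptor is a feature vector extracted from the sensor
# reading captured at that pose.
MAP = [
    ((0.0, 0.0, 0.0),           [0.9, 0.1, 0.0, 0.2]),
    ((1.0, 0.0, 0.0),           [0.2, 0.8, 0.1, 0.3]),
    ((1.0, 1.0, math.pi / 2),   [0.1, 0.2, 0.9, 0.4]),
]

def euclidean(a, b):
    """Euclidean distance between two descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_pose(observation, env_map):
    """Return the pose of the map entry whose descriptor best matches
    the current observation (nearest-neighbour search)."""
    best_pose, _ = min(env_map, key=lambda entry: euclidean(observation, entry[1]))
    return best_pose

# An observation close to the third map descriptor yields that entry's pose.
pose = estimate_pose([0.15, 0.25, 0.85, 0.35], MAP)
```

Real systems refine this with probabilistic filtering and geometric constraints, but the compare-against-the-map principle is the same.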
In recent years, vision sensors have become a very common choice for solving both the mapping and localization tasks, thanks to the large quantity of information they offer relative to their low cost. Furthermore, the use of images makes it possible to carry out other high-level tasks, such as people detection and recognition. However, since images are very high-dimensional data, they have to be processed to extract the relevant information. Several algorithms exist to carry out these tasks, but they tend to be mathematically complex. In addition, a variety of environments is necessary to test and tune the algorithms. In the first stages of design, these environments should be simple and static, so that the algorithms can be tested under ideal conditions. The use of real environments at this initial stage is not advisable, as their appearance tends to change (e.g. noise, occlusions, changes in lighting conditions, changes in the position of doors, objects, etc.), which would introduce an uncontrolled level of uncertainty into the algorithms.
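To make the "high-dimensional images must be processed" point concrete, a minimal sketch of dimensionality reduction follows (a crude stand-in chosen for illustration, not one of the algorithms the paper refers to): a grayscale image, stored as a list of rows of pixel intensities, is reduced to a short global descriptor by averaging non-overlapping cells.

```python
def block_average_descriptor(image, block):
    """Reduce a grayscale image (list of rows of pixel intensities) to a
    compact descriptor by averaging each non-overlapping block x block
    cell, reading the cells left to right, top to bottom."""
    h, w = len(image), len(image[0])
    desc = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            cell = [image[i][j]
                    for i in range(r, min(r + block, h))
                    for j in range(c, min(c + block, w))]
            desc.append(sum(cell) / len(cell))
    return desc

# A 4x4 checkerboard image collapses to one average value per 2x2 cell.
image = [
    [0,   0,   255, 255],
    [0,   0,   255, 255],
    [255, 255, 0,   0  ],
    [255, 255, 0,   0  ],
]
descriptor = block_average_descriptor(image, 2)
```

Even this trivial reduction turns 16 pixels into 4 numbers; the algorithms studied in the course pursue the same goal with far more discriminative descriptors.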
Taking these facts into account, we have developed a software tool that allows students to easily generate virtual environments in which they can simulate the movement of a virtual robot. With it, students can generate sets of images captured under ideal conditions and test their algorithms using these images. This software tool has been designed for the students of a Master's degree in Robotics, in which students learn how to design autonomous robots guided by computer vision. In this way, the tool is useful in the first stages of design: to easily generate sets of images, extract the main information from them and test the algorithms under ideal conditions.
According to our experience with this kind of topic, the use of real images unnecessarily complicates the first stages of algorithm design, and students usually get lost at this point. We expect this tool to help them better understand the testing process and to focus on the design and tuning of the algorithms.
Keywords: Mapping, robot localization, virtual environment, simulation, mobile robotics.