Speaker: Robert Sim, University of British Columbia
Date: Tuesday, November 22, 2005
Autonomous mobile robot systems have an important role to play in a wide variety of application domains. A key component for autonomy is the capability to explore an unknown environment and construct a representation that a robotic agent can use to localize, navigate, and reason about the world. In this talk I will present results on the automatic construction of visual representations. First, the Visual Map representation will be introduced as a method for modelling the visual structure of the world. Second, I will present a flexible architecture for robust real-time vision-based mapping of an unknown environment. Finally, I will conclude with a discussion of recent progress on the problem of autonomous robotic exploration, and illustrate issues in the problem of developing robotic explorers that are naturally curious about their environment.
The Visual Map framework is an approach to representing the visual world that enables a robot to learn models of the salient visual features of an environment. A key component of this representation is the ability to learn mappings between camera pose and image-domain features without imposing a priori assumptions about the structure of the environment, or the optical characteristics of the visual sensor. These mappings can be employed as generative models in a Bayesian framework for solving the robot localization problem, as well as for visual servoing and path planning.
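To make the idea concrete, here is a minimal sketch (not taken from the talk) of how a learned pose-to-feature mapping could serve as a generative observation model in a particle-filter-style Bayesian localization update. The linear map in predict_feature, its parameters W and b, and the Gaussian noise scale are illustrative assumptions; the actual Visual Map framework learns such mappings from data without assuming a particular functional form or sensor model.

```python
import numpy as np

# Hypothetical learned mapping from camera pose (x, y, theta) to an expected
# image-domain feature descriptor. A toy linear map stands in for a model
# learned from exploration data.
def predict_feature(pose, W, b):
    return W @ pose + b

def likelihood(observed, predicted, sigma=1.0):
    # Gaussian observation model p(z | pose): how well the observed feature
    # matches the feature predicted from a pose hypothesis.
    diff = observed - predicted
    return np.exp(-0.5 * np.dot(diff, diff) / sigma**2)

def bayes_update(particles, weights, observed, W, b):
    # Reweight pose hypotheses by how well each explains the observation,
    # then renormalise: the standard Bayesian measurement update.
    for i, pose in enumerate(particles):
        weights[i] *= likelihood(observed, predict_feature(pose, W, b))
    weights /= weights.sum()
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(4, 3)), rng.normal(size=4)   # toy learned mapping
    particles = rng.uniform(-1, 1, size=(100, 3))        # pose hypotheses
    weights = np.ones(100) / 100
    true_pose = np.array([0.2, -0.3, 0.1])
    z = predict_feature(true_pose, W, b) + rng.normal(scale=0.05, size=4)
    weights = bayes_update(particles, weights, z, W, b)
    print("most likely pose:", particles[np.argmax(weights)])
```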
The second part of this talk demonstrates an architecture for performing simultaneous localization and mapping with vision. The main goal of this work is to facilitate robust, large-scale, real-time mapping using vision. We employ a Rao-Blackwellised particle filter for managing uncertainty and examine a variety of robust proposal distributions, as well as the run-time and scaling characteristics of our architecture.
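As an illustration of the general technique (not the speaker's implementation), the sketch below is a compact Rao-Blackwellised particle filter in the FastSLAM style: each particle samples a robot trajectory, and the map landmarks are estimated analytically with small Kalman filters conditioned on that trajectory. The motion model, the relative-position observation model, and the noise parameters are toy assumptions chosen to keep the example self-contained.

```python
import numpy as np

class Particle:
    """One hypothesis in a Rao-Blackwellised particle filter: a sampled robot
    pose plus a map estimated analytically conditioned on that trajectory."""
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)   # (x, y, theta)
        self.landmarks = {}                       # id -> (mean, covariance)
        self.weight = 1.0

def motion_update(particles, control, rng, noise=0.05):
    # Sample each particle's new pose from an (assumed) noisy motion model;
    # this is the "particle" half of Rao-Blackwellisation.
    for p in particles:
        p.pose += control + rng.normal(scale=noise, size=3)

def measurement_update(particles, landmark_id, z, R=np.eye(2) * 0.1):
    # z is the landmark position observed relative to the robot (toy model).
    # Conditioned on each sampled pose, the landmark is updated with a small
    # Kalman filter: the analytically marginalised half of the filter.
    for p in particles:
        if landmark_id not in p.landmarks:
            p.landmarks[landmark_id] = (p.pose[:2] + z, R.copy())
            continue
        mean, cov = p.landmarks[landmark_id]
        innov = z - (mean - p.pose[:2])
        S = cov + R
        K = cov @ np.linalg.inv(S)
        p.landmarks[landmark_id] = (mean + K @ innov, (np.eye(2) - K) @ cov)
        p.weight *= np.exp(-0.5 * innov @ np.linalg.solve(S, innov))

def resample(particles, rng):
    # Draw a new particle set in proportion to the importance weights.
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    new = []
    for i in idx:
        q = Particle(particles[i].pose.copy())
        q.landmarks = {k: (m.copy(), c.copy())
                       for k, (m, c) in particles[i].landmarks.items()}
        new.append(q)
    return new
```

In a full system the new pose would be drawn from a more informed proposal distribution that also accounts for the current observation, which is one of the design choices examined in the talk.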
The latter part of this talk builds on representation and mapping to address robotic exploration. In order to acquire a representation of the world, a robot must first acquire data. From an information-theoretic point of view, this problem involves moving through the world so as to maximize the information gained from what is observed along the robot's trajectory. However, computing the optimal trajectory is complicated by several factors, including the presence of noise, the time horizon over which the robot plans, the specific objective function being optimized, and the robot's choice of sensor. I will present several results in this area that lead to the development of robust robotic systems that can plan over the long term and successfully demonstrate an emergent sense of curiosity.
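The information-theoretic view of exploration can be illustrated with a small, hypothetical example: score candidate viewpoints by the expected reduction in map entropy, penalized by travel cost, and greedily pick the best one. The occupancy representation, the idealized sensor model, and the trade-off weight below are assumptions made for the sketch; the talk considers richer objective functions, noise, and longer planning horizons.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (bits) of Bernoulli occupancy probabilities.
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information_gain(occupancy, visible_cells):
    # Toy sensor model: cells visible from a candidate viewpoint are assumed
    # to be fully resolved by the observation, so the expected gain is the
    # current entropy summed over those cells. A real system would integrate
    # over possible measurement outcomes and sensor noise.
    return entropy(occupancy[visible_cells]).sum()

def choose_next_view(occupancy, candidates, travel_costs, trade_off=0.1):
    # Score each candidate by expected gain minus a travel penalty and return
    # the best one: a greedy, one-step-horizon stand-in for planning over
    # longer trajectories.
    scores = [expected_information_gain(occupancy, vis) - trade_off * cost
              for vis, cost in zip(candidates, travel_costs)]
    return int(np.argmax(scores))

if __name__ == "__main__":
    occupancy = np.full(100, 0.5)                         # unknown map cells
    occupancy[:40] = 0.05                                 # explored free space
    candidates = [np.arange(30, 50), np.arange(60, 90)]   # visible cell sets
    travel_costs = [2.0, 10.0]
    print("next viewpoint:", choose_next_view(occupancy, candidates, travel_costs))
```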