Monday, February 27, 2012

Authors: Yohei Kakiuchi, Ryohei Ueda, Kei Okada, and Masayuki Inaba
Abstract— A humanoid robot working in a household environment alongside people needs to localize itself and continuously update the locations of obstacles and manipulable objects. Achieving such a system requires a robust perception method that can efficiently update a frequently changing environment.
We propose a method for mapping a household environment using multiple stereo and depth cameras mounted on the humanoid's head and placed in the environment. The method relies on colored 3D point cloud data computed from the sensors. We achieve robot localization by directly matching the point clouds from the robot sensor data with those from the environment sensor data. Object detection is performed using Iterative Closest Point (ICP) against a database of known point cloud models. To guarantee accurate detection results, objects are detected only within the robot sensor data. Furthermore, we use the environment sensor data to map the obstacles as bounding convex hulls.
We show experimental results of creating a household environment map with known object labels and of estimating the robot's position in this map.
[link]
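The object-detection step in the abstract is point-to-point ICP against a database of point cloud models. Below is a minimal sketch of that idea in Python/NumPy; the model database, convergence threshold, and the SciPy k-d tree correspondence search are illustrative assumptions, not the authors' implementation.

```python
# Minimal point-to-point ICP sketch: align a known object model to scene data.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(model, scene, iters=50, tol=1e-6):
    """Align a model cloud (N x 3) to a scene cloud (M x 3); returns R, t, mean error."""
    tree = cKDTree(scene)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        moved = model @ R.T + t
        dists, idx = tree.query(moved)    # closest-point correspondences
        R_step, t_step = best_rigid_transform(moved, scene[idx])
        R, t = R_step @ R, R_step @ t + t_step
        err = dists.mean()
        if abs(prev_err - err) < tol:     # stop once the fit no longer improves
            break
        prev_err = err
    return R, t, err

# Detection sketch: fit each known model and keep the best-scoring alignment,
# e.g. detections = {name: icp(cloud, scene) for name, cloud in model_db.items()}
```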
This blog is maintained by the Robot Perception and Learning lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Thursday, February 16, 2012
Lab meeting Feb 22 (Chih Chung): Motion planning in urban environments (Journal of Field Robotics 2008)
Authors: Dave Ferguson, Thomas M. Howard, and Maxim Likhachev
Abstract
We present the motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultrareliability, high-speed operation, complex intervehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically feasible actions with two higher-level planners for generating long-range plans in both on-road and unstructured areas of the environment. In the first part of this article, we describe the underlying trajectory generator and the on-road planning component of this system. We then describe the unstructured planning component used for navigating through parking lots and recovering from anomalous on-road scenarios. Throughout, we provide examples and results from “Boss,” an autonomous sport utility vehicle that has driven itself over 3,000 km and competed in, and won, the DARPA Urban Challenge.
[LINK]
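The trajectory generator in the abstract computes dynamically feasible actions by forward-simulating parameterized controls through a vehicle model. Here is a minimal sketch of that idea; the linearly varying curvature profile, kinematic unicycle model, cost weights, and grid search over parameters are simplifying assumptions (the paper optimizes curvature profiles to satisfy boundary-state constraints), not the authors' implementation.

```python
# Sampling-based model-predictive trajectory generation sketch: roll out
# candidate curvature profiles through a kinematic model and score endpoints.
import numpy as np

def rollout(k0, k1, s_total, v=5.0, dt=0.05):
    """Forward-simulate a unicycle along a curvature profile varying linearly
    from k0 to k1 over arc length s_total; returns the path as an (N x 3) array."""
    x, y, th = 0.0, 0.0, 0.0
    path = [(x, y, th)]
    s = 0.0
    while s < s_total:
        k = k0 + (k1 - k0) * (s / s_total)   # curvature at current arc length
        th += v * k * dt
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        s += v * dt
        path.append((x, y, th))
    return np.array(path)

def pose_cost(path, goal):
    """Weighted distance between the trajectory endpoint and a goal (x, y, heading)."""
    dx, dy, dth = path[-1] - goal
    return np.hypot(dx, dy) + 2.0 * abs(np.arctan2(np.sin(dth), np.cos(dth)))

def best_action(goal, s_total=15.0, n=21):
    """Search a grid of (k0, k1) curvature pairs for the rollout ending nearest the goal."""
    ks = np.linspace(-0.2, 0.2, n)           # curvature bounds ~ steering limits
    candidates = ((k0, k1) for k0 in ks for k1 in ks)
    return min(candidates, key=lambda a: pose_cost(rollout(a[0], a[1], s_total), goal))

# e.g. steer toward a pose 12 m ahead and 3 m left, heading unchanged:
# k0, k1 = best_action(np.array([12.0, 3.0, 0.0]))
```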