Title: System-Level Performance Analysis for Bayesian Cooperative Positioning: From Global to Local
Authors: Siwei Zhang, Ronald Raulefs, Armin Dammann, and Stephan Sand
In: IEEE International Conference on Indoor Positioning and Indoor Navigation 2013
Abstract
Cooperative positioning (CP) can be used either to calibrate the accumulated error from inertial navigation or as a stand-alone navigation system. Though intensive research has been conducted on CP, the joint impact of system-level factors on accuracy needs further investigation. We derive a posterior Cramér-Rao bound (PCRB) that considers both the physical layer (PHY) signal structure and the asynchronous latency from the multiple access control (MAC) layer. The PCRB shows an immediate relationship between the theoretical accuracy limit and the effective factors, e.g. geometry, node dynamics, latency, signal structure, and power, which is useful for assessing a cooperative system. However, for a large-scale decentralized cooperative network, calculating the PCRB becomes difficult due to the high state dimension and the absence of global information. We propose an equivalent ranging variance (ERV) scheme which projects a neighbor's positioning uncertainty onto the distance-measurement inaccuracy. With this, the effects of the interaction among the mobile terminals (MTs), e.g. measurement and communication, can be decoupled. We use the ERV to derive a local PCRB (L-PCRB) which approximates the PCRB locally at each MT with low complexity. We further propose combining the ERV and L-PCRB to improve the precision of Bayesian localization algorithms. Simulations with an L-PCRB-aided distributed particle filter (DPF) in two typical cooperative positioning scenarios show a significant improvement compared with the non-cooperative or standard DPF.
[Link]
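As a rough illustration of the ERV idea, the Python sketch below projects a neighbor's position covariance onto the line of sight, folds it into the raw ranging variance, and inverts the accumulated Fisher information for a local error bound. The function names and the simplified 2-D, range-only model are assumptions for illustration; the paper's PCRB additionally accounts for the PHY signal structure, MAC latency, and node dynamics, which are omitted here.

import numpy as np

def equivalent_ranging_variance(p_own, p_nbr, cov_nbr, sigma_r2):
    """Illustrative ERV: fold the neighbor's position uncertainty,
    projected onto the line of sight, into the raw ranging variance."""
    u = (p_nbr - p_own) / np.linalg.norm(p_nbr - p_own)  # unit LOS vector
    return sigma_r2 + u @ cov_nbr @ u

def local_position_bound(p_own, neighbors):
    """Invert the Fisher information accumulated over all range links
    to get a local mean-squared-error bound (2-D, range-only model)."""
    J = np.zeros((2, 2))
    for p_nbr, cov_nbr, sigma_r2 in neighbors:
        u = (p_nbr - p_own) / np.linalg.norm(p_nbr - p_own)
        J += np.outer(u, u) / equivalent_ranging_variance(p_own, p_nbr, cov_nbr, sigma_r2)
    return np.trace(np.linalg.inv(J))

# Two neighbors at right angles; the poorly localized one contributes
# a larger equivalent ranging variance and hence less information.
nbrs = [(np.array([10.0, 0.0]), 0.1 * np.eye(2), 1.0),
        (np.array([0.0, 10.0]), 4.0 * np.eye(2), 1.0)]
print(local_position_bound(np.zeros(2), nbrs))

This decoupling is what makes the bound local: each MT only needs its neighbors' position estimates and covariances, not the full network state.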
Thursday, February 20, 2014
Lab Meeting, February 20, 2014 (Jim): Learning Monocular Reactive UAV Control in Cluttered Natural Environments
Title: Learning Monocular Reactive UAV Control in Cluttered Natural Environments
Authors:
Stephane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas Wendel, Debadeepta Dey, J. Andrew (Drew) Bagnell, and Martial Hebert
IEEE International Conference on Robotics and Automation (ICRA), May 2013.
Abstract:
... Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAV's heading. We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors.
Link
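The imitation-learning technique here is in the DAgger family (Ross is DAgger's first author). Below is a minimal, self-contained Python sketch of that loop on a toy 1-D steering problem; the toy expert, dynamics, and linear learner are illustrative assumptions, not the authors' visual-feature pipeline or pilot interface.

import numpy as np

rng = np.random.default_rng(0)

def expert_command(obs):
    # Toy stand-in for the human pilot: steer away from the obstacle offset.
    return -0.8 * obs

def rollout(policy, horizon=50):
    """Fly one episode under `policy` and record the states visited."""
    obs, states = rng.normal(), []
    for _ in range(horizon):
        states.append(obs)
        obs = 0.9 * obs + policy(obs) + 0.1 * rng.normal()  # toy dynamics
    return states

# DAgger loop: the expert labels the states the *learner* visits,
# and the learner retrains on the aggregate dataset each iteration.
X, y, w = [], [], 0.0
for it in range(10):
    policy = expert_command if it == 0 else (lambda o: w * o)
    for s in rollout(policy):
        X.append(s)
        y.append(expert_command(s))  # query the expert on every visited state
    w = np.polyfit(X, y, 1)[0]       # refit a 1-D linear controller
print("learned steering gain:", w)   # approaches the expert's -0.8

The key point, which carries over to the MAV setting, is that training data is collected under the learner's own state distribution, so mistakes the learner makes are exactly the states the expert gets queried on.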
Tuesday, February 11, 2014
Lab Meeting, February 13, 2014 (Hung-Chih Lu): Zhaoyin Jia, Andrew Gallagher, Ashutosh Saxena, "3D-Based Reasoning with Blocks, Support, and Stability," CVPR 2013
Title:
3D-Based Reasoning with Blocks, Support, and Stability
Authors:
Zhaoyin Jia, Andrew Gallagher, Ashutosh Saxena.
Abstract:
3D volumetric reasoning is important for truly understanding a scene. Humans are able to both segment each object in an image, and perceive a rich 3D interpretation of the scene, e.g., the space an object occupies, which objects support other objects, and which objects would, if moved, cause other objects to fall. We propose a new approach for parsing RGB-D images using 3D block units for volumetric reasoning. The algorithm fits image segments with 3D blocks, and iteratively evaluates the scene based on block interaction properties. We produce a 3D representation of the scene based on jointly optimizing over segmentations, block fitting, supporting relations, and object stability. Our algorithm incorporates the intuition that a good 3D representation of the scene is the one that fits the data well, and is a stable, self-supporting (i.e., one that does not topple) arrangement of objects. We experiment on several datasets including controlled and real indoor scenarios. Results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.
From
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013
Link
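To make the "self-supporting arrangement" intuition concrete, here is a hedged Python sketch of a basic static-stability test: a block is treated as stable when its center of mass projects inside the convex hull of its ground contact points. The function name and simplified geometry are assumptions for illustration; the paper folds such stability cues into a joint optimization with segmentation and block fitting rather than testing blocks in isolation.

import numpy as np
from scipy.spatial import Delaunay

def is_stable(com_xy, contact_points_xy):
    """True if the center of mass, projected onto the ground plane,
    lies inside the convex hull of the block's contact points."""
    return Delaunay(contact_points_xy).find_simplex(com_xy) >= 0

# A box on four corner contacts: stable with a centered mass,
# toppling once the center of mass projects past an edge.
contacts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(is_stable(np.array([0.5, 0.5]), contacts))  # True
print(is_stable(np.array([1.4, 0.5]), contacts))  # False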