Title: Belief Propagation Based Localization and Mapping Using Sparsely Sampled GNSS SNR Measurements
In: ICRA 2014
Authors: Andrew T. Irish, Jason T. Isaacs, Francois Quitin, Joao P. Hespanha, and Upamanyu Madhow
Abstract
A novel approach is proposed to achieve simultaneous localization and mapping (SLAM) based on the signal-to-noise ratio (SNR) of global navigation satellite system (GNSS) signals. It is assumed that the environment is unknown and that the receiver location measurements (provided by a GNSS receiver) are noisy. The 3D environment map is decomposed into a grid of binary-state cells (occupancy grid), and the receiver locations are approximated by sets of particles. Using a large number of sparsely sampled GNSS SNR measurements and receiver/satellite coordinates (all available from off-the-shelf GNSS receivers), likelihoods of blockage are associated with every receiver-to-satellite beam. The posterior distribution of the map and poses is shown to be represented by a factor graph, on which Loopy Belief Propagation is used to efficiently estimate the probability of each cell being occupied or empty, along with the probabilities of the particles for each receiver location. Experimental results demonstrate our algorithm's ability to coarsely map (in three dimensions) a corner of a university campus, while also correcting for uncertainties in the location of the GNSS receiver.
Link
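The beam-blockage idea in the abstract can be sketched with a toy occupancy update: each receiver-to-satellite ray carries evidence about the grid cells it crosses. This is a naive per-beam log-odds accumulation, not the paper's Loopy Belief Propagation on the full factor graph; the function name and the rule spreading evidence uniformly over a beam's cells are assumptions for illustration.

```python
import math

def update_occupancy(beams, prior=0.5):
    """Accumulate per-cell log-odds of blockage from many beams.

    beams: list of (cell_ids, p_blocked) pairs, where cell_ids are the
    grid cells a receiver-to-satellite ray passes through and p_blocked
    is the blockage likelihood inferred from that beam's SNR.
    """
    log_odds = {}
    prior_lo = math.log(prior / (1.0 - prior))
    for cells, p_blocked in beams:
        # Clamp, then spread the beam's evidence uniformly over its
        # cells (a simplifying assumption, not the paper's model).
        p = min(max(p_blocked, 1e-6), 1 - 1e-6)
        delta = math.log(p / (1.0 - p)) / len(cells)
        for c in cells:
            log_odds[c] = log_odds.get(c, prior_lo) + delta
    # Convert log-odds back to occupancy probabilities.
    return {c: 1.0 / (1.0 + math.exp(-lo)) for c, lo in log_odds.items()}
```

A high-SNR beam (line of sight likely clear) pushes its cells toward empty, while a low-SNR beam pushes them toward occupied; many sparse beams crossing the same cell gradually sharpen its estimate.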
This Blog is maintained by the Robot Perception and Learning lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers, which are capable of servicing people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Wednesday, June 25, 2014
Tuesday, June 17, 2014
Lab Meeting, Jun 19, 2014 (Jim): Bayesian Exploration and Interactive Demonstration in Continuous State MAXQ-Learning
Title:
Bayesian Exploration and Interactive Demonstration in Continuous State MAXQ-Learning
IEEE International Conference on Robotics and Automation, May, 2014.
Author:
Kathrin Gräve and Sven Behnke
Abstract:
... Inspired by the way humans decompose complex tasks, hierarchical methods for robot learning have attracted significant interest. In this paper, we apply the MAXQ method for hierarchical reinforcement learning to continuous state spaces. By using Gaussian Process Regression for MAXQ value function decomposition, we obtain probabilistic estimates of primitive and completion values for every subtask within the MAXQ hierarchy. ... Based on the expected deviation of these estimates, we devise a Bayesian exploration strategy that balances optimization of expected values and risk from exploring unknown actions. To further reduce risk and to accelerate learning, we complement MAXQ with learning from demonstrations in an interactive way. In every situation and subtask, the system may ask for a demonstration if there is not enough knowledge available to determine a safe action for exploration. We demonstrate the ability of the proposed system to efficiently learn solutions to complex tasks on a box stacking scenario.
Link
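The exploration strategy described above can be sketched as risk-aware action selection: balance the expected value of an action against its predictive uncertainty, and fall back to requesting a demonstration when nothing is known well enough. Here `gp_mean` and `gp_std` stand in for Gaussian Process Regression estimates of MAXQ subtask values; the lower-confidence-bound rule and the threshold values are illustrative assumptions, not the authors' exact criterion.

```python
def select_action(actions, gp_mean, gp_std, risk_beta=1.0, demo_threshold=2.0):
    """Risk-aware action selection in the spirit of Bayesian exploration.

    actions: candidate actions for the current subtask.
    gp_mean, gp_std: dicts mapping actions to the mean and standard
    deviation of their (hypothetical) GP value estimates.
    Returns the chosen action, or None to request a demonstration.
    """
    # If even the best-known action is too uncertain, ask the human.
    if min(gp_std[a] for a in actions) > demo_threshold:
        return None
    # Lower confidence bound: penalize uncertain (risky) actions.
    return max(actions, key=lambda a: gp_mean[a] - risk_beta * gp_std[a])
```

Raising `risk_beta` makes the policy more conservative; lowering `demo_threshold` makes it ask for demonstrations more eagerly, trading human effort for safer exploration.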
Wednesday, June 11, 2014
Lab Meeting, Jun 12, 2014 (Zhi-qiang): Simon Hadfield, Member, IEEE; Richard Bowden, Senior Member, IEEE. "Scene Particles: Unregularized Particle Based Scene Flow Estimation" IEEE TRANSACTIONS PATTERN ANALYSIS AND MACHINE INTELLIGENCE (PAMI), 2014
Title:
Scene Particles: Unregularized Particle Based Scene Flow Estimation
Author:
Simon Hadfield; Richard Bowden
IEEE Pattern Analysis and Machine Intelligence (PAMI), 2014
Link: http://personal.ee.surrey.ac.uk/Personal/S.Hadfield/papers/Scene%20particles.pdf
Abstract
In this paper, an algorithm is presented for estimating scene flow, which is a richer, 3D analogue of optical flow. The approach operates orders of magnitude faster than alternative techniques and is well suited to further performance gains through parallelized implementation. The algorithm employs multiple hypotheses to deal with motion ambiguities, rather than the traditional smoothness constraints, removing oversmoothing errors and providing significant performance improvements on benchmark data over the previous state of the art. The approach is flexible and capable of operating with any combination of appearance and/or depth sensors, in any setup, simultaneously estimating the structure and motion if necessary. Additionally, the algorithm propagates information over time to resolve ambiguities, rather than performing an isolated estimation at each frame, as in contemporary approaches. Approaches to smoothing the motion field without sacrificing the benefits of multiple hypotheses are explored, and a probabilistic approach to occlusion estimation is demonstrated, leading to 10% and 15% improved performance, respectively. Finally, a data-driven tracking approach is described and used to estimate the 3D trajectories of hands during sign language, without the need to model complex appearance variations at each viewpoint.
Link: http://personal.ee.surrey.ac.uk/Personal/S.Hadfield/papers/Scene%20particles.pdf
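The multiple-hypothesis idea in the abstract can be sketched as a particle update: each particle is a 3D motion hypothesis, scored by how well it agrees with the data and resampled without any smoothness prior coupling neighbouring estimates. The measurement model `weight_fn` and the resampling scheme are assumptions for illustration, not the authors' full scene-particles pipeline.

```python
import random

def propagate_particles(particles, weight_fn, n_keep=None):
    """One unregularized multi-hypothesis update over motion particles.

    particles: list of 3D motion hypotheses (e.g. (dx, dy, dz) tuples).
    weight_fn: assumed measurement model scoring a hypothesis by its
    appearance/depth consistency (higher is better).
    """
    weights = [weight_fn(p) for p in particles]
    if sum(weights) == 0:
        return particles  # no informative measurement; keep all hypotheses
    n = n_keep or len(particles)
    # Importance resampling proportional to data likelihood; ambiguous
    # regions naturally retain several competing hypotheses.
    return random.choices(particles, weights=weights, k=n)
```

Because no smoothness constraint is imposed, fine motion detail survives the update; carrying the resampled particles into the next frame is what lets evidence accumulate over time to resolve ambiguities.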
Wednesday, June 04, 2014
Lab meeting Jun 5, 2014 (Hung-Chih Lu): Robust Online Multi-Object Tracking based on Tracklet Confidence and Online Discriminative Appearance Learning
Title: Robust Online Multi-Object Tracking based on Tracklet Confidence and Online Discriminative Appearance Learning
Authors: Seung-Hwan Bae and Kuk-Jin Yoon
Abstract:
Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively.
We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.
CVPR 2014
Link
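The tracklet confidence described above combines two ingredients: detectability (how well the tracklet matched its detections) and continuity (how long and unbroken it is). The formula below is a toy illustration of that idea; the exact definition, the `beta` weight, and the inputs are assumptions, not the authors' equations.

```python
def tracklet_confidence(det_scores, length, n_missed, beta=0.4):
    """Toy tracklet confidence from detectability and continuity.

    det_scores: detection-match scores collected along the tracklet.
    length: tracklet length in frames; n_missed: frames with no match.
    Returns a value in [0, 1]; higher means a more reliable tracklet.
    """
    if length == 0:
        return 0.0
    # Detectability: average quality of the matched detections.
    detectability = sum(det_scores) / len(det_scores) if det_scores else 0.0
    # Continuity: penalize gaps relative to the tracklet's length.
    continuity = max(0.0, 1.0 - beta * n_missed / length)
    return detectability * continuity
```

In the paper's strategy, high-confidence tracklets are grown locally with new detections, while low-confidence (fragmented) ones are instead linked globally to other tracklets, so a score like this decides which association path each tracklet takes.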