Wednesday, April 20, 2011

NTU PAL Thesis Defense: Mobile Robot Localization in Large-scale Dynamic Environments

Mobile Robot Localization in Large-scale Dynamic Environments

Shao-Wen Yang
Doctoral Dissertation Defense
Department of Computer Science and Information Engineering
National Taiwan University

Time: Thursday, 19 May, 2011 at 8:00AM +0800 (CST)
Location: R542, Der-Tian Hall

Advisor: Chieh-Chih Wang

Thesis Committee:

Li-Chen Fu
Jane Yung-Jen Hsu
Han-Pang Huang
Ta-Te Lin
Chu-Song Chen, Sinica
Jwu-Sheng Hu, NCTU
John J. Leonard, MIT


Localization is the most fundamental problem in providing a mobile robot with autonomous capabilities. Whilst simultaneous localization and mapping (SLAM) and moving object tracking (MOT) have attracted immense attention in the last decade, the focus of robotics continues to shift from stationary robots in factory automation environments to mobile robots operating in human-inhabited environments. State-of-the-art approaches that rely on the static-world assumption can fail in real environments, which are typically dynamic. Specifically, the real environment is challenging for mobile robots due to the variety of perceptual inconsistencies over space and time. Developing situational awareness is particularly important so that mobile robots can adapt quickly to changes in the environment.

In this thesis, we explore the problem of mobile robot localization in the real world in theory and practice, and show that localization can benefit from both stationary and dynamic entities.

The performance of ego-motion estimation depends on the consistency between sensory information at successive time steps, whereas the performance of localization relies on the consistency between the sensory information and the a priori map. These inconsistencies can leave a robot unable to robustly determine its location in the environment. We show that mobile robot localization, ego-motion estimation, and moving object detection are mutually beneficial. Most importantly, addressing these inconsistencies serves as the basis for mobile robot localization, and forms a solid bridge between SLAM and MOT.

Localization, as well as moving object detection, is not only challenging but also difficult to evaluate quantitatively due to the lack of a realistic ground truth. As the key competencies for mobile robotic systems are localization and semantic context interpretation, an annotated data set, as well as an interactive annotation tool, is released to facilitate the development, evaluation and comparison of algorithms for localization, mapping, moving object detection, moving object tracking, etc.

In summary, a unified stochastic framework is introduced to solve the problems of motion estimation and motion segmentation simultaneously in highly dynamic environments in real time. A dual-model localization framework that uses information from both the static scene and dynamic entities is proposed to improve localization performance by explicitly incorporating, rather than filtering out, moving object information. In extensive experiments, sub-meter accuracy is achieved without the aid of GPS, which is adequate for autonomous navigation in crowded urban scenes. The empirical results suggest that localization performance can be improved when the changing environment is handled explicitly.
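The idea of solving motion estimation and motion segmentation together can be illustrated with a toy, pure-Python sketch: alternate between robustly estimating ego-motion from points currently believed static, and re-labeling points as static or dynamic according to how well they agree with that motion. The translation-only motion model, threshold, and data below are invented for illustration and are not the thesis's actual algorithm:

```python
from statistics import median

def estimate_and_segment(prev_pts, curr_pts, threshold=0.5, iters=3):
    """Alternate between robust ego-motion estimation (translation only)
    and static/dynamic segmentation of point correspondences."""
    idx = list(range(len(prev_pts)))
    static = idx
    for _ in range(iters):
        # Robust translation estimate from points currently labeled static.
        dx = median(curr_pts[i][0] - prev_pts[i][0] for i in static)
        dy = median(curr_pts[i][1] - prev_pts[i][1] for i in static)
        # Re-label: a point is static iff its motion agrees with the ego-motion.
        static = [i for i in idx
                  if abs(curr_pts[i][0] - prev_pts[i][0] - dx) <= threshold
                  and abs(curr_pts[i][1] - prev_pts[i][1] - dy) <= threshold]
    dynamic = [i for i in idx if i not in static]
    return (dx, dy), static, dynamic

# Four static landmarks shifted by the robot's motion (1, 0),
# plus one moving object (index 4) that also translated on its own.
prev = [(0, 0), (2, 0), (0, 2), (2, 2), (5, 5)]
curr = [(1, 0), (3, 0), (1, 2), (3, 2), (9, 5)]
motion, static, dynamic = estimate_and_segment(prev, curr)
```

The median makes the initial motion estimate robust to the moving object, so the segmentation converges even though every point starts out labeled static.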


Sunday, April 17, 2011

Lab Meeting April 20, 2011 (fish60): Donut as I do: Learning from failed demonstrations

Title: Donut as I do: Learning from failed demonstrations
In: 2011 IEEE International Conference on Robotics and Automation (ICRA)
Authors: Daniel Grollman (EPFL), Aude Billard (EPFL)

Abstract: The canonical Robot Learning from Demonstration scenario has a robot observing human demonstrations of a task or behavior in a few situations, and then developing a generalized controller. ... However, the underlying assumption is that the demonstrations are successful, and are appropriate to reproduce. We, instead, consider the possibility that the human has failed in their attempt, and their demonstration is an example of what not to do. Thus, instead of maximizing the similarity of generated behaviors to those of the demonstrators, we examine two methods that deliberately avoid repeating the human's mistakes.

Link

Tuesday, April 12, 2011

Lab Meeting April 13, 2011 (Will): Hilbert Space Embeddings of Hidden Markov Models (ICML2010)

Title: Hilbert Space Embeddings of Hidden Markov Models
In: ICML 2010
Authors: Le Song, Byron Boots, Sajid Siddiqi, Geoffrey Gordon, Alex Smola
Hidden Markov Models (HMMs) are important tools for modeling sequence data. However, they are restricted to discrete latent states, and are largely restricted to Gaussian and discrete observations. Moreover, learning algorithms for HMMs have predominantly relied on local search heuristics, with the exception of spectral methods. We propose a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions. Furthermore, we derive a local-minimum-free kernel spectral algorithm for learning these HMMs. We apply our method to robot vision data, slot car inertial sensor data and audio event classification data, and show that in these applications, embedded HMMs exceed the previous state-of-the-art performance.
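For context, the classical discrete HMM that the paper generalizes performs filtering with the forward algorithm: predict the latent state through the transition matrix, then reweight by the emission likelihood of the new observation. A minimal sketch with toy parameters (two states, two observation symbols, all values invented):

```python
def forward(obs, pi, A, B):
    """Return P(latent state | observations so far) after each observation."""
    n = len(pi)
    # Initialize with the prior reweighted by the first observation.
    belief = [pi[s] * B[s][obs[0]] for s in range(n)]
    z = sum(belief)
    belief = [b / z for b in belief]
    history = [belief]
    for o in obs[1:]:
        # Predict: push the belief through the transition matrix.
        predicted = [sum(belief[s] * A[s][t] for s in range(n))
                     for t in range(n)]
        # Update: reweight by the emission likelihood, then normalize.
        belief = [predicted[t] * B[t][o] for t in range(n)]
        z = sum(belief)
        belief = [b / z for b in belief]
        history.append(belief)
    return history

pi = [0.6, 0.4]                  # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]     # transition matrix A[s][t] = P(t | s)
B = [[0.9, 0.1], [0.2, 0.8]]     # emission matrix B[s][o] = P(o | s)
beliefs = forward([0, 0, 1], pi, A, B)
```

The paper's contribution replaces these finite probability tables with Hilbert space embeddings, allowing structured and continuous non-Gaussian observations while preserving this predict-update recursion.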


Lab Meeting April 13, 2011 (Jimmy): WiFi-SLAM Using Gaussian Process Latent Variable Models (IJCAI2007)

Title: WiFi-SLAM Using Gaussian Process Latent Variable Models
In: IJCAI 2007
Authors: Brian Ferris, Dieter Fox, and Neil Lawrence

WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data. We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization.
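The labeled-training-set approach the paper seeks to avoid can be sketched as simple fingerprint matching: store signal-strength readings at known map locations, then localize a new reading by nearest neighbor. The access points, RSSI values, and locations below are invented for illustration; the paper's contribution is precisely to remove the need for such ground-truth labels via GPLVM:

```python
def localize(reading, fingerprints):
    """Return the map location whose stored signal strengths best match."""
    def distance(a, b):
        # Compare only access points heard in both readings.
        shared = set(a) & set(b)
        if not shared:
            return float("inf")
        return sum((a[ap] - b[ap]) ** 2 for ap in shared) / len(shared)
    return min(fingerprints, key=lambda loc: distance(reading, fingerprints[loc]))

# Labeled training map: location -> {access point: RSSI in dBm}.
fingerprints = {
    (0, 0): {"ap1": -40, "ap2": -70},
    (5, 0): {"ap1": -70, "ap2": -45},
    (0, 5): {"ap1": -55, "ap2": -80},
}
print(localize({"ap1": -68, "ap2": -48}, fingerprints))  # nearest to (5, 0)
```

Collecting and maintaining the `fingerprints` table is exactly the cost that grows prohibitive with map size, which motivates learning the latent locations from unlabeled signal-strength sequences instead.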