Sunday, November 28, 2010

Lab Meeting November 29, 2010 (Wang Li): Adaptive Pose Priors for Pictorial Structures (CVPR 2010)

Adaptive Pose Priors for Pictorial Structures

Benjamin Sapp
Chris Jordan
Ben Taskar


The structure and parameterization of a pictorial structure model is often restricted by assuming tree dependency structure and unimodal, data-independent pairwise interactions, which fail to capture important patterns in the data. On the other hand, local methods such as kernel density estimation provide nonparametric flexibility but require large amounts of data to generalize well. We propose a simple semi-parametric approach that combines the tractability of pictorial structure inference with the flexibility of non-parametric methods by expressing a subset of model parameters as kernel regression estimates from a learned sparse set of exemplars. This yields query-specific, image-dependent pose priors. We develop an effective shape-based kernel for upper-body pose similarity and propose a leave-one-out loss function for learning a sparse subset of exemplars for kernel regression. We apply our techniques to two challenging datasets of human figure parsing and advance the state-of-the-art (from 80% to 86% on the Buffy dataset), while using only 15% of the training data as exemplars.
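The core idea — expressing model parameters as kernel regression estimates over a sparse exemplar set — can be sketched with a plain Nadaraya-Watson estimator. This is an illustrative stand-in only: the paper uses a learned shape-based similarity kernel and a leave-one-out loss to pick exemplars, whereas this sketch assumes a generic Gaussian kernel on feature vectors and made-up argument names.

```python
import numpy as np

def kernel_regression(query_feat, exemplar_feats, exemplar_params, bandwidth=1.0):
    """Nadaraya-Watson estimate: blend exemplar parameter vectors, weighted
    by the similarity of the query image's features to each exemplar.
    (Illustrative sketch; the paper's kernel is shape-based, not Gaussian.)"""
    # squared feature distance from the query to every exemplar
    d2 = np.sum((exemplar_feats - query_feat) ** 2, axis=1)
    # Gaussian kernel weights, normalized to sum to one
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    # weighted average of exemplar parameters -> query-specific prior
    return w @ exemplar_params
```

Because the weights depend on the query image, the resulting prior adapts per image while inference still runs in the tractable pictorial-structure model.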

Paper Link

Saturday, November 27, 2010

Lab Meeting November 29th, 2010 (Jeff): Sub-Meter Indoor Localization in Unmodified Environments with Inexpensive Sensors

Title: Sub-Meter Indoor Localization in Unmodified Environments with Inexpensive Sensors

Authors: Morgan Quigley, David Stavens, Adam Coates, and Sebastian Thrun


The interpretation of uncertain sensor streams for localization is usually considered in the context of a robot. Increasingly, however, portable consumer electronic devices, such as smartphones, are equipped with sensors including WiFi radios, cameras, and inertial measurement units (IMUs). Many tasks typically associated with robots, such as localization, would be valuable to perform on such devices. In this paper, we present an approach for indoor localization exclusively using the low-cost sensors typically found on smartphones. Environment modification is not needed. We rigorously evaluate our method using ground truth acquired using a laser range scanner. Our evaluation includes overall accuracy and a comparison of the contribution of individual sensors. We find experimentally that fusion of multiple sensor modalities is necessary for optimal performance and demonstrate sub-meter localization accuracy.
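The fusion of IMU and WiFi measurements described above is commonly done with a particle filter; the sketch below shows one predict/update cycle under that assumption. All names (`pf_step`, `wifi_map`, the noise parameters) are hypothetical, and the paper's actual estimator and sensor models may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, imu_delta, wifi_obs, wifi_map,
            sigma_motion=0.1, sigma_wifi=4.0):
    """One particle-filter cycle fusing an IMU odometry increment with a
    WiFi signal-strength observation. `wifi_map(x)` is an assumed function
    returning the expected RSSI at position x (e.g. from a survey map)."""
    # predict: propagate particles by the IMU increment plus motion noise
    particles = particles + imu_delta + rng.normal(0, sigma_motion, particles.shape)
    # update: reweight by how well the predicted RSSI matches the measurement
    expected = np.array([wifi_map(p) for p in particles])
    weights = weights * np.exp(-(expected - wifi_obs) ** 2 / (2 * sigma_wifi ** 2))
    weights = weights / weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

The point the paper makes experimentally is that no single modality suffices: the IMU constrains short-term motion while WiFi anchors the absolute position, and the filter combines the two.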

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2010


Monday, November 22, 2010

Lab Meeting November 22, 2010 (Andi): Three-Dimensional Mapping with Time-of-Flight Cameras

Title: Three-Dimensional Mapping with Time-of-Flight Cameras

Authors: Stefan May, David Droeschel, Dirk Holz, Stefan Fuchs, Ezio Malis, Andreas Nuechter and Joachim Hertzberg

Journal of Field Robotics 2009

Abstract: This article investigates the use of time-of-flight (ToF) cameras in mapping tasks for autonomous mobile robots, in particular in simultaneous localization and mapping (SLAM) tasks. Although ToF cameras are in principle an attractive type of sensor for three-dimensional (3D) mapping owing to their high frame rate of 3D data, two features make them difficult as mapping sensors, namely, their restricted field of view and influences on the quality of range measurements by high dynamics in object reflectivity; in addition, currently available models suffer from poor data quality in a number of aspects. The paper first summarizes calibration and filtering approaches for improving the accuracy, precision, and robustness of ToF cameras independent of their intended usage. Then, several ego motion estimation approaches are applied or adapted, respectively, in order to provide a performance benchmark for registering ToF camera data. As a part of this, an extension to the iterative closest point algorithm has been developed that increases the robustness under restricted field of view and under larger displacements. Using an indoor environment, the paper provides results from SLAM experiments using these approaches in comparison. It turns out that applying ToF cameras to SLAM tasks is feasible, although this type of sensor has a complex error characteristic.
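For context, the iterative closest point (ICP) algorithm that the paper extends alternates two steps: match each source point to its nearest destination point, then solve for the rigid transform in closed form via SVD (the Kabsch/Horn solution). The sketch below shows one plain point-to-point iteration in 2D; it does not include the paper's robustness extension for restricted fields of view, and the brute-force matching is for clarity only.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: nearest-neighbour matching followed
    by closed-form rigid alignment (Kabsch/Horn via SVD). Returns the
    transformed source points plus the rotation R and translation t."""
    # nearest-neighbour correspondences (brute force for clarity)
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]
    # closed-form rigid alignment of src onto its matched points
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return (R @ src.T).T + t, R, t
```

Iterating this step until the alignment error stops decreasing gives the basic registration that the ToF mapping pipeline builds on.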

Sunday, November 21, 2010

Lab Meeting November 22, 2010 (Alan): Temporary Maps for Robust Localization in Semi-static Environments (IROS 2010)

Title: Temporary Maps for Robust Localization in Semi-static Environments (IROS 2010)
Authors: Daniel Meyer-Delius, Jurgen Hess, Giorgio Grisetti, Wolfram Burgard

Abstract—Accurate and robust localization is essential for the successful navigation of autonomous mobile robots. The majority of existing localization approaches, however, are based on the assumption that the environment is static, which does not hold for most practical application domains. In this paper, we present a localization framework that can robustly track a robot’s pose even in non-static environments. Our approach keeps track of the observations caused by unexpected objects in the environment using temporary local maps. It relies both on these temporary local maps and on a reference map of the environment for estimating the pose of the robot. Experimental results demonstrate that by exploiting the observations caused by unexpected objects our approach outperforms standard localization methods for static environments.

Link: pdf

Monday, November 15, 2010

Lab Meeting November 15, 2010 (KuenHan): 3D Reconstruction of a Moving Point from a Series of 2D Projections (ECCV 2010)

Title: 3D Reconstruction of a Moving Point from a Series of 2D Projections
Authors: Hyun Soo Park, Takaaki Shiratori, Iain Matthews, and Yaser Sheikh


This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.
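The "linear solution" can be illustrated as follows: represent the moving point's trajectory in a low-dimensional temporal basis (the paper uses a trajectory basis such as the DCT), and note that each 2D observation imposes two linear constraints on the trajectory coefficients through its camera matrix. Stacking those constraints gives an ordinary least-squares problem. This is a simplified sketch under assumed conventions (a K-term cosine basis, times normalized to [0, 1]); the paper's full formulation and its reconstructibility analysis go further.

```python
import numpy as np

def reconstruct_trajectory(P_list, uv_list, times, K):
    """Least-squares sketch: solve for the 3K coefficients of a K-term
    cosine (DCT-like) basis expansion of the moving point's 3D trajectory,
    from one 2D observation per camera. P_list holds 3x4 projection
    matrices; uv_list holds 2D image points; times lie in [0, 1]."""
    F = len(P_list)
    A = np.zeros((2 * F, 3 * K))
    b = np.zeros(2 * F)
    for i, (P, uv, t) in enumerate(zip(P_list, uv_list, times)):
        # temporal basis evaluated at this frame's capture time
        basis = np.array([np.cos(np.pi * k * t) for k in range(K)])
        # each image coordinate gives one linear equation: (u*P3 - P1).X = 0
        for r, coord in enumerate(uv):
            row = coord * P[2] - P[r]               # homogeneous row, length 4
            A[2 * i + r] = np.kron(row[:3], basis)  # acts on the 3K coefficients
            b[2 * i + r] = -row[3]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    coeffs = c.reshape(3, K)
    # recover the 3D position at each observed time instant
    return np.array([coeffs @ np.array([np.cos(np.pi * k * t) for k in range(K)])
                     for t in times])
```

Because the unknowns are shared trajectory coefficients rather than one 3D point per frame, a single view per time instant suffices — which is exactly why triangulation-based solutions do not apply here.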


Sunday, November 14, 2010

Lab Meeting November 15, 2010 (fish60): Unfreezing the Robot: Navigation in Dense, Interacting Crowds

Title: Unfreezing the Robot: Navigation in Dense, Interacting Crowds (IROS 2010)
Authors: Peter Trautman and Andreas Krause

Abstract—In this paper, we study the safe navigation of a mobile robot through crowds of dynamic agents with uncertain trajectories. Existing algorithms suffer from the “freezing robot” problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. ... In this work, we demonstrate that both the individual prediction and the predictive uncertainty have little to do with the frozen robot problem. Our key insight is that dynamic agents solve the frozen robot problem by engaging in “joint collision avoidance”: They cooperatively make room to create feasible trajectories. We develop IGP, a nonparametric statistical model based on Dependent Output Gaussian Processes that can estimate crowd interaction from data. Our model naturally captures the non-Markov nature of agent trajectories, as well as their goal-driven navigation. We then show how planning in this model can be efficiently implemented using particle based inference.
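The building block that IGP extends is ordinary Gaussian process regression over an agent's trajectory; IGP couples the outputs of several such processes (robot and pedestrians) to model joint collision avoidance. The sketch below shows only the plain single-output case with a squared-exponential kernel, with assumed hyperparameter names, predicting one coordinate of an agent's position over time.

```python
import numpy as np

def gp_predict(t_obs, y_obs, t_query, length=1.0, sig_n=0.05):
    """Plain GP regression with a squared-exponential kernel: given observed
    positions y_obs at times t_obs, return the predictive mean and variance
    at t_query. (Single output only; IGP makes the outputs dependent.)"""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length ** 2))
    K = k(t_obs, t_obs) + sig_n ** 2 * np.eye(len(t_obs))   # noisy Gram matrix
    Ks = k(t_query, t_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)                   # predictive mean
    var = k(t_query, t_query) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(var)
```

Because a GP conditions on the whole observed history, it naturally captures the non-Markov, goal-driven trajectories the abstract mentions; the "freezing" pathology arises only when such predictions are made independently per agent.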


Monday, November 01, 2010

CMU PhD Thesis Defense: Geolocation with Range: Robustness, Efficiency and Scalability

CMU RI PhD Thesis Defense
Joseph A. Djugash
Geolocation with Range: Robustness, Efficiency and Scalability
November 05, 2010, 10:00 a.m., NSH 1507


This thesis explores the topic of geolocation with range. A robust method for localization and SLAM (Simultaneous Localization and Mapping) is proposed. This method uses a polar parameterization of the state to achieve accurate estimates of the nonlinear and multi-modal distributions in range-only systems. Several experimental evaluations on real robots reveal the reliability of this method.

Scaling such a system to a large network of nodes increases the computational load on the system due to the larger state vector. To alleviate this problem, we propose the use of a distributed estimation algorithm based on the belief propagation framework. This method distributes the estimation task so that each node only estimates its local network, greatly reducing the computation performed by any individual node. However, the method does not provide any guarantees on the convergence of its solution in general graphs; convergence is only guaranteed for acyclic graphs (i.e., trees). Thus, an extension of this approach which reduces any arbitrary graph to a spanning tree is presented. This enables the proposed decentralized localization method to provide guarantees on its convergence.
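The graph-to-tree reduction mentioned above can be illustrated with a breadth-first-search spanning tree: keep the edge that first reaches each node and drop any edge that would close a cycle, leaving a tree on which belief propagation converges. This is a generic sketch; the thesis's actual criterion for which edges to keep may differ.

```python
from collections import deque

def spanning_tree(adjacency, root):
    """Reduce an arbitrary sensor-network graph to a spanning tree by BFS.
    `adjacency` maps each node to its list of neighbours; returns the list
    of retained (parent, child) edges. Edges that would close a cycle are
    dropped, so the result is acyclic and BP on it is guaranteed to converge."""
    parent = {root: None}
    queue = deque([root])
    tree_edges = []
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:          # first visit: keep this edge
                parent[v] = u
                tree_edges.append((u, v))
                queue.append(v)
    return tree_edges
```

A connected graph with n nodes always yields exactly n - 1 retained edges, at the cost of discarding the range constraints on the dropped edges.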

Scaling in the traditional sense involves extensions to deal with growth in the size of the operating environment. In large, feature-less environments, maintaining a globally consistent estimate of a group of mobile agents is difficult. In this thesis, a novel multi-robot coordination strategy is proposed. Based on the observability analysis of the system, the proposed controller achieves the tight coordination necessary to obtain an accurate global estimate. The proposed approach is demonstrated using both simulation and experimental testing with real robots.


Thesis Committee
Sanjiv Singh, Chair
George Kantor
Howie Choset
Wolfram Burgard, University of Freiburg