Sunday, August 31, 2008

Lab Meeting September 1, 2008 (Jimmy): Recovering Surface Layout from an Image

Title: Recovering Surface Layout from an Image
Authors: Derek Hoiem, Alexei A. Efros, and Martial Hebert
IJCV, Vol. 75, No. 1, October 2007

Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.
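As a rough illustration of the multiple-hypothesis idea (not the authors' actual pipeline): each segmentation hypothesis produces per-pixel probabilities over the geometric classes, and averaging across hypotheses yields the per-label confidences the abstract mentions. The class set and array shapes below are simplified assumptions.

```python
import numpy as np

# Coarse geometric classes (simplified; the paper further splits
# "vertical" into subclasses like left/center/right/porous/solid).
LABELS = ["support", "vertical", "sky"]

def combine_hypotheses(prob_maps):
    """Each segmentation hypothesis yields per-pixel label
    probabilities (H x W x num_labels). Averaging over hypotheses
    gives a marginal confidence per geometric label per pixel."""
    stacked = np.stack(prob_maps)          # (num_hyp, H, W, L)
    confidences = stacked.mean(axis=0)     # (H, W, L)
    labels = confidences.argmax(axis=-1)   # most likely class per pixel
    return confidences, labels
```

Downstream applications can then consume the soft confidences rather than the hard labels.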

Full text: pdf
Journal version: pdf

Sunday, August 17, 2008

Lab Meeting August 18, 2008 (Andi): Multi-Sensor Lane Finding in Urban Road Networks

Title: Multi-Sensor Lane Finding in Urban Road Networks

Albert Huang, David Moore, Matthew Antone, Edwin Olson, Seth Teller

Abstract: This paper describes a perception-based system for detecting and estimating the properties of multiple travel lanes in an urban road network from calibrated video imagery and laser range data acquired by a moving vehicle. The system operates in several stages on multiple processors, fusing detected road markings, obstacles, and curbs into a stable non-parametric estimate of nearby travel lanes. The system incorporates elements of a provided piecewise-linear road network as a weak prior. Our method is notable in several respects: it fuses asynchronous, heterogeneous sensor streams; it distributes naturally across several CPUs communicating only through message-passing; it handles high-curvature roads; and it makes no assumption about the position or orientation of the vehicle with respect to the travel lane. We analyze the system's performance in the context of the 2007 DARPA Urban Challenge, where with five cameras and thirteen lidars it was incorporated into a closed-loop controller to successfully guide an autonomous vehicle through a 90 km urban course at speeds up to 40 km/h.
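One small piece of the system that can be sketched is the weak road-network prior: candidate lane positions are scored by detected-marking evidence, with the provided map contributing only a lightly weighted preference. Everything below (function name, Gaussian prior, weights) is an illustrative assumption, not the paper's actual formulation.

```python
import numpy as np

def score_lane_candidates(offsets, marking_evidence, prior_offset,
                          w_prior=0.2):
    """Score candidate lateral lane offsets by combining detected
    road-marking evidence with a weak road-network prior.
    marking_evidence[i] is the detection support for offsets[i];
    the prior mildly prefers candidates near the mapped position."""
    offsets = np.asarray(offsets, dtype=float)
    evidence = np.asarray(marking_evidence, dtype=float)
    prior = np.exp(-0.5 * (offsets - prior_offset) ** 2)
    scores = evidence + w_prior * prior   # weak prior: small weight
    return offsets[int(np.argmax(scores))]
```

With a small `w_prior`, strong marking evidence overrides the map; the prior only breaks ties where markings are missing or ambiguous.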

Read the full paper here.


Lab Meeting August 18, 2008 (Atwood): Laser and Vision Based Outdoor Object Mapping

Title: Laser and Vision Based Outdoor Object Mapping

Authors: Bertrand Douillard, Dieter Fox, Fabio Ramos

Generating rich representations of environments is a fundamental task in mobile robotics. In this paper we introduce a novel approach to building object type maps of outdoor environments. Our approach uses conditional random fields (CRF) to jointly classify the laser returns in a 2D scan map into seven object types (car, wall, tree trunk, foliage, person, grass, and other). The spatial connectivity of the CRF is determined via Delaunay triangulation of the laser map. Our model incorporates laser shape features, visual appearance features, visual object detectors trained on existing image data sets, and structural information extracted from clusters of laser returns. The parameters of the CRF are trained from partially labeled laser and camera data collected by a car moving through an urban environment. Our approach achieves 77% accuracy in classifying the object types observed along a 750 m long test trajectory.
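The Delaunay-based connectivity is easy to sketch: triangulate the 2D laser returns and take the triangle edges as the pairwise links of the CRF. A minimal version using SciPy (the feature extraction and parameter learning of the paper are omitted):

```python
import numpy as np
from scipy.spatial import Delaunay

def crf_edges_from_scan(points):
    """Build the pairwise connectivity of the CRF from a Delaunay
    triangulation of 2D laser returns: each triangle edge links two
    laser points whose object labels are then jointly classified."""
    tri = Delaunay(np.asarray(points, dtype=float))
    edges = set()
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))  # undirected, deduplicated
    return sorted(edges)
```

The resulting edge list defines where the pairwise potentials of the CRF live; neighboring returns thus encourage consistent labels.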


Lab Meeting August 18, 2008 (Chuan-Heng): Using Recognition to Guide a Robot's Attention

Title: Using Recognition to Guide a Robot's Attention

Authors: Alexander Thomas, Vittorio Ferrari, Bastian Leibe, Tinne Tuytelaars and Luc Van Gool

Abstract: In the transition from industrial to service robotics, robots will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. This is demonstrated for object instances that have never been observed during training, and under partial occlusion and against cluttered backgrounds. Our approach builds on the Implicit Shape Model of Leibe and Schiele, and extends it to couple recognition to the provision of meta-data useful for a task. Meta-data can for example consist of part labels or depth estimates. We present experimental results on wheelchairs and cars.
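The meta-data transfer idea can be caricatured as follows: each codebook match that supports a detection carries an annotation (a part label or depth value) from its training image, and the backprojected annotations are averaged per pixel. This is a drastic simplification of the actual ISM-based method; the names and vote format are invented for illustration.

```python
import numpy as np

def transfer_metadata(votes, H, W):
    """Sketch of meta-data transfer: each vote contributing to a
    detection brings an annotation sample from training; weighted
    averaging over backprojected samples yields per-pixel meta-data
    (e.g. depth) for the novel object instance.
    votes: list of (row, col, value, weight) annotation samples."""
    acc = np.zeros((H, W))
    wsum = np.zeros((H, W))
    for y, x, value, weight in votes:
        acc[y, x] += weight * value
        wsum[y, x] += weight
    # Pixels with no votes stay at 0 (unknown).
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-9), 0.0)
```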

RSS Online Proceedings: here
Abstract: here
PDF: here

Saturday, August 16, 2008

Lab Meeting August 17, 2008 (Bob): CVPR 2008 Summary (II)

I will keep summarizing CVPR 2008. If time permits, I will also discuss the CMU motion planning system for urban environments.


Friday, August 15, 2008

Robot News

Rise of the rat-brained robots. [link]

Rubbery conductor promises robots a stretchy skin. [Link] [Journal reference: Science (DOI:10.1126/science.1160309)]

Saturday, August 09, 2008

Lab Meeting August 11, 2008 (Bob): CVPR 2008 Summary (I)

I will summarize the CVPR 2008 papers at the lab meeting.



Wednesday, August 06, 2008

Learning Obstacle Avoidance Parameters from Operator Behavior

Bradley Hamner, Sanjiv Singh,and Sebastian Scherer

This paper concerns an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment.

Here we present the formulation for this control system and its independent parameters and then show how these parameters can be automatically estimated by observing a human driver. We also present results from operation on an autonomous robot as well as in simulation, and compare the results from our method to another commonly used learning method.
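The abstract does not spell out the estimation procedure, but the general shape of the problem, fitting controller parameters so that predicted commands match the human driver's, can be sketched as a least-squares fit. This is a hypothetical stand-in, not the authors' method:

```python
import numpy as np

def fit_avoidance_gains(features, operator_commands):
    """Hypothetical sketch: fit linear controller gains so that the
    predicted steering command (features @ gains) matches the
    recorded human steering in a least-squares sense.
    features: one row of obstacle/goal features per logged timestep."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(operator_commands, dtype=float)
    gains, *_ = np.linalg.lstsq(X, y, rcond=None)
    return gains
```

Once fitted, the gains parameterize the avoidance controller so the autonomous vehicle reproduces the operator's trade-off between goal-seeking and obstacle clearance.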


Tuesday, August 05, 2008

Lab Meeting August 11, 2008 (Any): Model Based Vehicle Tracking for Autonomous Driving in Urban Environments

Title: Model Based Vehicle Tracking for Autonomous Driving in Urban Environments

Authors: Anna Petrovskaya and Sebastian Thrun

Abstract: Situational awareness is crucial for autonomous driving in urban environments. This paper describes the moving-vehicle tracking module that we developed for our successful entry in the Urban Grand Challenge, an autonomous driving race organized by the U.S. Government in 2007. The module provides reliable tracking of moving vehicles from a high-speed moving platform using laser range finders. Our approach models both dynamic and geometric properties of the tracked vehicles and estimates them using a single Bayes filter. We also show how to build efficient 2D representations out of 3D range data and how to detect poorly visible black vehicles.

In contrast to prior art, we propose a model based approach which encompasses both geometric and dynamic properties of the tracked vehicle in a single Bayes filter. The approach naturally handles data segmentation and association, so that these pre-processing steps are not required.
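As a toy illustration of a Bayes-filter predict/update cycle (the paper's filter jointly tracks a much richer state of vehicle pose, velocity, and geometry), here is a 1D constant-velocity Kalman filter over position and velocity from range measurements:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=0.1, q=0.5, r=1.0):
    """One predict/update cycle of a Bayes filter for a tracked
    vehicle's 1D position and velocity (constant-velocity model).
    x = [position, velocity], P its covariance, z a range measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])     # process noise
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fold in the measurement, weighted by the Kalman gain.
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The same predict/update structure carries over to the paper's setting; only the state, motion model, and laser measurement model become more elaborate.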

RSS Online Proceedings: here
Abstract: here
PDF: here

Monday, August 04, 2008

Lab Meeting August 11, 2008 (ZhenYu): Variable Baseline/Resolution Stereo

Title: Variable Baseline/Resolution Stereo (CVPR 2008)

David Gallup, Jan-Michael Frahm, Philippos Mordohai, Marc Pollefeys


We present a novel multi-baseline, multi-resolution stereo method, which varies the baseline and resolution proportionally to depth to obtain a reconstruction in which the depth error is constant. This is in contrast to traditional stereo, in which the error grows quadratically with depth, which means that the accuracy in the near range far exceeds that of the far range. This accuracy in the near range is unnecessarily high and comes at significant computational cost. It is, however, non-trivial to reduce this without also reducing the accuracy in the far range. Many datasets, such as video captured from a moving camera, allow the baseline to be selected with significant flexibility. By selecting an appropriate baseline and resolution (realized using an image pyramid), our algorithm computes a depthmap which has these properties: 1) the depth accuracy is constant over the reconstructed volume, 2) the computational effort is spread evenly over the volume, 3) the angle of triangulation is held constant w.r.t. depth. Our approach achieves a given target accuracy with minimal computational effort, and is orders of magnitude faster than traditional stereo.
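The core geometric fact is that standard stereo depth error grows as dz ≈ z²·Δd / (f·b), so holding dz constant requires the product f·b to grow as z². A tiny sketch of the resulting baseline choice (variable names and target values are made up for illustration):

```python
def baseline_for_constant_error(z, focal_px, disp_err=0.5, target_err=0.05):
    """Standard stereo depth error grows as dz ~ z^2 * disp_err / (f * b).
    Solving for the baseline that pins dz at a fixed target gives
    b = z^2 * disp_err / (f * target_err). Since the method also scales
    resolution (f in pixels) proportionally to depth via an image
    pyramid, the chosen baseline itself grows only linearly with z."""
    return z**2 * disp_err / (focal_px * target_err)
```

For a fixed focal length, doubling the depth quadruples the required baseline, which is exactly the quadratic growth the paper removes by also scaling resolution with depth.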


Sunday, August 03, 2008

Lab Meeting August 4th, 2008 (Yu-Chun): Robots in Organizations: The Role of Workflow, Social, and Environmental Factors in Human-Robot Interaction

Robots in Organizations: The Role of Workflow, Social, and Environmental Factors in Human-Robot Interaction

Authors: Bilge Mutlu and Jodi Forlizzi

HRI 2008 Best Conference Paper [PDF]

Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived costs and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path caused the robot to have a negative impact on the workflow and provoked staff resistance. In contrast, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.