Sunday, August 31, 2008
Authors: Derek Hoiem, Alexei A. Efros, and Martial Hebert
IJCV, Vol. 75, No. 1, October 2007
Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.
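As a rough illustration of the multiple-hypothesis idea (a sketch, not the authors' implementation), per-pixel label confidences can be obtained by marginalizing segment label likelihoods over several segmentation hypotheses, weighting each segment by how likely it is to be geometrically homogeneous. All names and inputs below are hypothetical:

```python
import numpy as np

LABELS = ["support", "vertical", "sky"]  # coarse geometric classes

def label_confidences(hypotheses, n_pixels):
    """hypotheses: list of (segments, seg_label_probs, seg_homogeneity).
    segments: length-n_pixels array mapping pixel -> segment id
    seg_label_probs: (n_segments, n_labels) label likelihoods
    seg_homogeneity: (n_segments,) P(segment is a single surface)
    Returns (n_pixels, n_labels) confidences."""
    conf = np.zeros((n_pixels, len(LABELS)))
    total = np.zeros(n_pixels)
    for segments, probs, homog in hypotheses:
        w = homog[segments]                   # per-pixel segment weight
        conf += w[:, None] * probs[segments]  # weighted label votes
        total += w
    return conf / total[:, None]
```

Averaging over hypotheses this way means no single (possibly wrong) segmentation has to be committed to; a segment that is unlikely to be a single surface simply contributes little.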
Full text: pdf
Journal version: pdf
Sunday, August 17, 2008
Authors: Albert Huang, David Moore, Matthew Antone, Edwin Olson, Seth Teller
Abstract: This paper describes a perception-based system for detecting and estimating the properties of multiple travel lanes in an urban road network from calibrated video imagery and laser range data acquired by a moving vehicle. The system operates in several stages on multiple processors, fusing detected road markings, obstacles, and curbs into a stable non-parametric estimate of nearby travel lanes. The system incorporates elements of a provided piecewise-linear road network as a weak prior. Our method is notable in several respects: it fuses asynchronous, heterogeneous sensor streams; it distributes naturally across several CPUs communicating only through message-passing; it handles high-curvature roads; and it makes no assumption about the position or orientation of the vehicle with respect to the travel lane. We analyze the system's performance in the context of the 2007 DARPA Urban Challenge, where with five cameras and thirteen lidars it was incorporated into a closed-loop controller to successfully guide an autonomous vehicle through a 90 km urban course at speeds up to 40 km/h.
read the full paper here
Authors: Bertrand Douillard, Dieter Fox, Fabio Ramos
Generating rich representations of environments is a fundamental task in mobile robotics. In this paper we introduce a novel approach to building object type maps of outdoor environments. Our approach uses conditional random fields (CRFs) to jointly classify the laser returns in a 2D scan map into seven object types (car, wall, tree trunk, foliage, person, grass, and other). The spatial connectivity of the CRF is determined via Delaunay triangulation of the laser map. Our model incorporates laser shape features, visual appearance features, visual object detectors trained on existing image data sets, and structural information extracted from clusters of laser returns. The parameters of the CRF are trained from partially labeled laser and camera data collected by a car moving through an urban environment. Our approach achieves 77% accuracy in classifying the object types observed along a 750-meter test trajectory.
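The Delaunay-based connectivity can be sketched in a few lines with SciPy (an illustrative reconstruction, not the authors' code): each edge of the triangulation of the 2D laser returns becomes a pairwise link in the CRF graph.

```python
import numpy as np
from scipy.spatial import Delaunay

def crf_edges(points):
    """points: (n, 2) array of laser-return positions.
    Returns the set of CRF pairwise links as (i, j) index pairs, i < j."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:  # each simplex lists 3 vertex indices
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = sorted((simplex[a], simplex[b]))
                edges.add((i, j))
    return edges
```

Delaunay connectivity is a natural choice here because it links each return to its spatial neighbors without requiring a hand-tuned distance threshold.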
robots will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. This is demonstrated for object instances that have never been observed during training, and under partial occlusion and against cluttered backgrounds. Our approach builds on the Implicit Shape Model of Leibe and Schiele, and extends it to couple recognition to the provision of meta-data useful for a task. Meta-data can for example consist of part labels or depth estimates. We present experimental results on wheelchairs and cars.
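A minimal sketch of the Implicit-Shape-Model-style mechanism for transferring meta-data (hypothetical names and data, not the paper's code): matched codebook entries vote for an object center in a coarse Hough grid, and the matches consistent with the winning center back-project their stored part labels.

```python
from collections import defaultdict

def vote_and_transfer(matches, grid=10):
    """matches: list of (feature_xy, offset_to_center, part_label), where
    offset_to_center is the displacement stored with the codebook entry.
    Returns (winning center cell, part labels supporting that center)."""
    votes = defaultdict(list)
    for (fx, fy), (ox, oy), label in matches:
        # quantize the voted center into a coarse Hough cell
        cell = (round((fx + ox) / grid), round((fy + oy) / grid))
        votes[cell].append(label)
    center = max(votes, key=lambda c: len(votes[c]))
    return center, votes[center]
```

Because only the matches that agree on the object center contribute their annotations, the transferred meta-data is robust to clutter and partial occlusion in the same way the underlying detection is.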
RSS Online Proceedings: here
Saturday, August 16, 2008
Friday, August 15, 2008
Saturday, August 09, 2008
Wednesday, August 06, 2008
Bradley Hamner, Sanjiv Singh,and Sebastian Scherer
This paper concerns an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment.
Here we present the formulation for this control system and its independent parameters and then show how these parameters can be automatically estimated by observing a human driver. We also present results from operation on an autonomous robot as well as in simulation, and compare the results from our method to another commonly used learning method. Link
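The general recipe the paper describes, learning controller parameters from demonstration, can be sketched as a regression problem (a toy illustration, not the authors' controller): a parameterized steering law is evaluated on logged sensor features, and its gains are chosen to minimize squared error against the human's recorded commands.

```python
import numpy as np

def fit_gains(features, human_steering):
    """features: (n, k) per-timestep control features, e.g. a hypothetical
    goal-heading error term and an obstacle-repulsion term.
    human_steering: (n,) steering commands recorded from the human driver.
    Returns the k gains minimizing squared prediction error."""
    gains, *_ = np.linalg.lstsq(features, human_steering, rcond=None)
    return gains
```

Fitting gains this way turns tedious manual tuning into a data problem: any drive with the sensor suite running yields more training pairs.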
Tuesday, August 05, 2008
Lab Meeting August 11, 2008 (Any): Model Based Vehicle Tracking for Autonomous Driving in Urban Environments
RSS Online Proceedings: here
Monday, August 04, 2008
Authors: David Gallup, Jan-Michael Frahm, Philippos Mordohai, Marc Pollefeys
We present a novel multi-baseline, multi-resolution stereo method, which varies the baseline and resolution proportionally to depth to obtain a reconstruction in which the depth error is constant. This is in contrast to traditional stereo, in which the error grows quadratically with depth, which means that the accuracy in the near range far exceeds that of the far range. This accuracy in the near range is unnecessarily high and comes at significant computational cost. It is, however, non-trivial to reduce this without also reducing the accuracy in the far range. Many datasets, such as video captured from a moving camera, allow the baseline to be selected with significant flexibility. By selecting an appropriate baseline and resolution (realized using an image pyramid), our algorithm computes a depth map which has these properties: 1) the depth accuracy is constant over the reconstructed volume, 2) the computational effort is spread evenly over the volume, 3) the angle of triangulation is held constant w.r.t. depth. Our approach achieves a given target accuracy with minimal computational effort, and is orders of magnitude faster than traditional stereo.
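A back-of-the-envelope sketch of the constant-error idea (illustrative numbers, not the paper's implementation): stereo depth error is roughly dz = z^2 * dd / (b * f), so it grows quadratically with depth z for a fixed baseline b, focal length f, and disparity precision dd. Growing the baseline linearly with depth and coarsening the image pyramid for the near range cancels both factors of z:

```python
import math

def depth_error(z, baseline, focal_px, disparity_err_px):
    """First-order stereo depth error: dz = z^2 * dd / (b * f)."""
    return z ** 2 * disparity_err_px / (baseline * focal_px)

def pick_baseline_and_level(z, z_max, b_max, focal_px, dd=0.5):
    """Baseline proportional to depth; pyramid level coarsens the near
    range (each level doubles the effective disparity error)."""
    b = b_max * z / z_max                        # b grows linearly with z
    level = max(0, round(math.log2(z_max / z)))  # coarser when near
    return b, level
```

With b proportional to z and the effective disparity error proportional to z_max / z, the z^2 in the numerator cancels exactly, so every depth is reconstructed to roughly the far-range accuracy at a fraction of the near-range cost.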
Sunday, August 03, 2008
Lab Meeting August 4th, 2008 (Yu-Chun): Robots in Organizations: The Role of Workflow, Social, and Environmental Factors in Human-Robot Interaction
Authors: Bilge Mutlu and Jodi Forlizzi
HRI 2008 Best Conference Paper [PDF]
Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In the medical units, low tolerance for interruptions, a discrepancy between the perceived costs and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path led to a negative impact on workflow and to staff resistance. In contrast, the post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.