Sunday, December 26, 2010

Lab Meeting January 3rd, 2011 (David): Vision-Based Behavior Prediction in Urban Traffic Environments by Scene Categorization (BMVC 2010)

Title: Vision-Based Behavior Prediction in Urban Traffic Environments by Scene Categorization (BMVC 2010)

Authors: Martin Heracles, Fernando Martinelli and Jannik Fritsch

We propose a method for vision-based scene understanding in urban traffic environments that predicts the appropriate behavior of a human driver in a given visual scene. The method relies on a decomposition of the visual scene into its constituent objects by image segmentation and uses segmentation-based features that represent both their identity and spatial properties. We show how the behavior prediction can be naturally formulated as a scene categorization problem and how ground truth behavior data for learning a classifier can be automatically generated from any monocular video sequence recorded from a moving vehicle, using structure from motion techniques. We evaluate our method both quantitatively and qualitatively on the recently proposed CamVid dataset, predicting the appropriate velocity and yaw rate of the car as well as their appropriate change for both day and dusk sequences. In particular, we investigate the impact of the underlying segmentation and the number of behavior classes on the quality of these predictions.
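The pipeline in the abstract can be sketched in a few lines: segmentation-based features become label histograms, ground-truth (velocity, yaw rate) pairs are binned into discrete behavior classes, and prediction reduces to classification. A minimal sketch, assuming a histogram-only feature, hand-picked bin edges, and a nearest-centroid classifier as a stand-in (none of these specifics are from the paper):

```python
import numpy as np

def segment_histogram(seg, n_classes):
    """Identity feature from a segmented image: a normalized histogram of
    per-pixel object labels (the paper also encodes spatial properties)."""
    h = np.bincount(seg.ravel(), minlength=n_classes).astype(float)
    return h / h.sum()

def behavior_class(v, yaw, v_edges, yaw_edges):
    """Bin a ground-truth (velocity, yaw rate) pair into one discrete
    behavior class, so behavior prediction becomes scene categorization."""
    return int(np.digitize(v, v_edges) * (len(yaw_edges) + 1)
               + np.digitize(yaw, yaw_edges))

def predict(feature, centroids):
    """Nearest-centroid stand-in for the learned scene classifier."""
    return int(np.argmin(np.linalg.norm(centroids - feature, axis=1)))
```

With, say, `v_edges=[5, 15]` and `yaw_edges=[-0.1, 0.1]` this yields 3 x 3 = 9 behavior classes; the abstract's final point is precisely how this class count affects prediction quality.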


Wednesday, December 22, 2010

Lab Meeting December 27, 2010 (Chih Chung): Platt et al., Belief space planning assuming maximum likelihood observations (RSS 2010)

Title: Belief space planning assuming maximum likelihood observations

Authors: Robert Platt Jr., Russ Tedrake, Leslie Kaelbling, Tomas Lozano-Perez

We cast the partially observable control problem as a fully observable underactuated stochastic control problem in belief space and apply standard planning and control techniques. One of the difficulties of belief space planning is modeling the stochastic dynamics resulting from unknown future observations. The core of our proposal is to define deterministic belief-system dynamics based on an assumption that the maximum likelihood observation (calculated just prior to the observation) is always obtained. The stochastic effects of future observations are modelled as Gaussian noise. Given this model of the dynamics, two planning and control methods are applied. In the first, linear quadratic regulation (LQR) is applied to generate policies in the belief space. This approach is shown to be optimal for linear-Gaussian systems. In the second, a planner is used to find locally optimal plans in the belief space. We propose a replanning approach that is shown to converge to the belief space goal in a finite number of replanning steps. These approaches are characterized in the context of a simple nonlinear manipulation problem where a planar robot simultaneously locates and grasps an object.
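The maximum-likelihood-observation assumption makes the belief dynamics deterministic: the mean follows the noiseless process model (the innovation is zero by construction), while the covariance contracts exactly as in a Kalman measurement update. A minimal linear-Gaussian sketch of one such belief step (the matrices and the double-integrator example are my assumptions, not from the paper):

```python
import numpy as np

def ml_belief_update(mu, Sigma, u, A, B, C, Q, R):
    """One step of deterministic belief dynamics under the
    maximum-likelihood-observation assumption: the mean follows the
    noiseless process model, and the covariance shrinks as if the
    predicted (most likely) measurement had actually arrived."""
    # Process update: noiseless mean, noise-inflated covariance
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Measurement update with zero innovation (z = C @ mu_bar assumed)
    S = C @ Sigma_bar @ C.T + R
    K = Sigma_bar @ C.T @ np.linalg.inv(S)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_bar, Sigma_new  # mean unchanged by the (assumed) observation

# Example: 1D double integrator with direct position measurements
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])
mu, Sigma = np.array([0.0, 0.0]), np.eye(2)
for _ in range(10):
    mu, Sigma = ml_belief_update(mu, Sigma, np.array([0.5]), A, B, C, Q, R)
```

Because the update is deterministic, a planner (or LQR in the paper's first method) can roll these belief states forward like ordinary states; the neglected stochasticity of real observations is what the replanning scheme absorbs.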


Sunday, December 19, 2010

Lab Meeting December 20, 2010 (Chung-Han): progress report

I will report my progress on ground-truth annotation.

Sunday, December 12, 2010

Lab Meeting December 13, 2010 (ShaoChen): DDF-SAM: Fully Distributed SLAM using Constrained Factor Graphs (IROS 2010)

Title: DDF-SAM: Fully Distributed SLAM using Constrained Factor Graphs

Authors: Alexander Cunningham, Manohar Paluri, and Frank Dellaert


We address the problem of multi-robot distributed SLAM with an extended Smoothing and Mapping (SAM) approach to implement Decentralized Data Fusion (DDF). We present DDF-SAM, a novel method for efficiently and robustly distributing map information across a team of robots, to achieve scalability in computational cost and in communication bandwidth and robustness to node failure and to changes in network topology. DDF-SAM consists of three modules: (1) a local optimization module to execute single-robot SAM and condense the local graph; (2) a communication module to collect and propagate condensed local graphs to other robots; and (3) a neighborhood graph optimizer module to combine local graphs into maps describing the neighborhood of a robot. We demonstrate scalability and robustness through a simulated example, in which inference is consistently faster than a comparable naive approach.
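The "condense the local graph" step in module (1) can be illustrated for the Gaussian case: marginalizing a robot's local factor graph onto the variables it shares with neighbors is a Schur complement on the information matrix. A sketch under that Gaussian/information-form assumption (the function and variable names are mine, not from the paper):

```python
import numpy as np

def condense(Lambda, eta, keep):
    """Marginalize a Gaussian factor graph in information form
    (Lambda, eta) onto the 'keep' variables shared with neighboring
    robots, via the Schur complement. The result is the condensed
    local graph a robot would communicate in DDF-SAM-style fusion."""
    n = Lambda.shape[0]
    drop = [i for i in range(n) if i not in keep]
    Lkk = Lambda[np.ix_(keep, keep)]
    Lkd = Lambda[np.ix_(keep, drop)]
    Ldd_inv = np.linalg.inv(Lambda[np.ix_(drop, drop)])
    # Schur complement: eliminate the internal (non-shared) variables
    Lambda_c = Lkk - Lkd @ Ldd_inv @ Lkd.T
    eta_c = eta[keep] - Lkd @ Ldd_inv @ eta[drop]
    return Lambda_c, eta_c
```

The condensed system is exact for the shared variables: solving it yields the same marginal means the full local graph would, which is why neighbors can fuse these summaries without exchanging raw measurements.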


Monday, December 06, 2010

Lab Meeting December 6th, 2010 (Nicole): Acoustic Source Localization and Tracking Using Track Before Detect

Title: Acoustic Source Localization and Tracking Using Track Before Detect

Authors: Maurice F. Fallon, Simon Godsill

Particle Filter-based Acoustic Source Localization algorithms attempt to track the position of a sound source—one or more people speaking in a room—based on the current data from a microphone array as well as all previous data up to that point. This paper first discusses some of the inherent behavioral traits of the steered beamformer localization function. Using conclusions drawn from that study, a multitarget methodology for acoustic source tracking based on the Track Before Detect (TBD) framework is introduced. The algorithm also implicitly evaluates source activity using a variable appended to the state vector. Using the TBD methodology avoids the need to identify a set of source measurements and also allows for a vast increase in the number of particles used for a comparative computational load, which results in increased tracking stability in challenging recording environments. An evaluation of tracking performance is given using a set of real speech recordings with two simultaneously active speech sources.
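The two ideas highlighted in the abstract—weighting particles directly on the raw localization surface instead of thresholded detections, and an activity variable appended to the state—can be sketched for a single 1-D source. This is a toy illustration, not the paper's model: the Gaussian "beamformer" peak, the noise floor of 0.2, and all rates are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(x):
    """Stand-in for the steered-beamformer response: a peak at the true
    source position (1.0 here) over a small floor. Purely illustrative."""
    return np.exp(-(x - 1.0) ** 2 / (2 * 0.1 ** 2)) + 0.01

def tbd_step(particles):
    """One Track-Before-Detect particle filter step. State per particle:
    (position, activity flag). Weights come straight from the raw
    localization function—no detection step—and inactive particles are
    scored against a constant background level."""
    n = len(particles)
    # Propagate: random-walk motion plus occasional activity switching
    particles[:, 0] += rng.normal(0.0, 0.05, n)
    flip = rng.random(n) < 0.05
    particles[flip, 1] = 1.0 - particles[flip, 1]
    # Weight: active particles see the localization surface, inactive a floor
    active = particles[:, 1] > 0.5
    w = np.where(active, likelihood(particles[:, 0]), 0.2)
    w /= w.sum()
    # Systematic resampling to keep the particle set from degenerating
    u = (rng.random() + np.arange(n)) / n
    return particles[np.searchsorted(np.cumsum(w), u)]

particles = np.column_stack([rng.uniform(0.0, 2.0, 500),
                             rng.integers(0, 2, 500).astype(float)])
for _ in range(30):
    particles = tbd_step(particles)
```

Because weighting is a cheap per-particle lookup rather than a data-association step, many more particles fit in the same computational budget—the trade the abstract credits for the improved stability.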


Lab Meeting December 6th, 2010 (KuoHuei): progress report

I will present my progress on Neighboring Objects Interaction models.