Wednesday, March 27, 2013

Lab Meeting, March 28, 2013 (Chiang Yi): Efficient Model-based 3D Tracking of Hand Articulations using Kinect (BMVC 2011)

Authors: Iason Oikonomidis, Nikolaos Kyriazis, and Antonis A. Argyros

Abstract:
We present a novel solution to the problem of recovering and tracking the 3D position, orientation and full articulation of a human hand from markerless visual observations obtained by a Kinect sensor. We treat this as an optimization problem, seeking the hand model parameters that minimize the discrepancy between the appearance and 3D structure of hypothesized instances of a hand model and actual hand observations. This optimization problem is effectively solved using a variant of Particle Swarm Optimization (PSO). The proposed method does not require special markers and/or a complex image acquisition setup. Being model based, it provides continuous solutions to the problem of tracking hand articulations. Extensive experiments with a prototype GPU-based implementation of the proposed method demonstrate that accurate and robust 3D tracking of hand articulations can be achieved in near real-time (15 Hz).


Link

Extended work: Tracking the articulated motion of two strongly interacting hands
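To give a feel for the optimization framing in the abstract, here is a minimal generic Particle Swarm Optimization loop. This is only an illustrative sketch, not the authors' GPU implementation: the objective below is a toy stand-in for their hand-model discrepancy, and all parameter values are placeholders.

```python
import numpy as np

def pso(objective, dim, n_particles=64, n_iters=100,
        w=0.72, c1=1.49, c2=1.49, bounds=(-1.0, 1.0)):
    """Minimize `objective` over a box with a basic PSO loop."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # global best position
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy stand-in for the hand-model discrepancy: a simple sphere function.
best, best_f = pso(lambda p: float(np.sum(p**2)), dim=4)
```

In the paper the decision vector would instead encode the 3D hand pose parameters, and the objective would render a hypothesized hand and score it against the observed Kinect depth map.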

Tuesday, March 19, 2013

Lab Meeting, March 21, 2013 (Yen-Ting): Extracting 3D Scene-Consistent Object Proposals and Depth from Stereo Images (ECCV 2012)

Authors: Michael Bleyer, Christoph Rhemann, and Carsten Rother

Abstract: This work combines two active areas of research in computer vision: unsupervised object extraction from a single image, and depth estimation from a stereo image pair. A recent, successful trend in unsupervised object extraction is to exploit so-called "3D scene-consistency", that is, enforcing that objects obey underlying physical constraints of the 3D scene, such as occupancy of 3D space and gravity of objects. Our main contribution is to introduce the concept of 3D scene-consistency into stereo matching. We show that this concept is beneficial for both tasks, object extraction and depth estimation. In particular, we demonstrate that our approach is able to create a large set of 3D scene-consistent object proposals, by varying, e.g., the prior on the number of objects...

Link

Thursday, March 14, 2013

Lab Meeting, March 14, 2013 (Channing): The Design of LEO: a 2D Bipedal Walking Robot for Online Autonomous Reinforcement Learning (IROS 2010)

Authors: Erik Schuitema, Martijn Wisse, Thijs Ramakers and Pieter Jonker

Abstract: Real robots demonstrating online Reinforcement Learning (RL) to learn new tasks are hard to find. The specific properties and limitations of real robots have a large impact on their suitability for RL experiments. In this work, we derive the main hardware and software requirements that an RL robot should fulfill, and present our biped robot LEO that was specifically designed to meet these requirements. We verify its aptitude in autonomous walking experiments using a pre-programmed controller. Although there is room for improvement in the design, the robot was able to walk, fall and stand up without human intervention for 8 hours, during which it made over 43,000 footsteps.

Link

Wednesday, March 13, 2013

Lab Meeting, March 7, 2013 (Benny): A Segmentation and Data Association Annotation System for Laser-based Multi-Target Tracking Evaluation

Authors: Chien-Chen Weng, Chieh-Chih Wang, and Jennifer Healey

Abstract: 2D laser scanners are now widely used for robot perception tasks such as SLAM and multi-target tracking (MTT). While a number of SLAM benchmarking datasets are available, only a few works have discussed the issues of collecting multi-target tracking benchmarking datasets.
In this work, a segmentation and data association annotation system is proposed for evaluating multi-target tracking with 2D laser scanners. The proposed system uses an existing MTT algorithm to generate initial annotation results and uses camera images as strong hints to help annotators recognize moving objects in laser scans. Annotators can draw an object's shape and future trajectory to automate segmentation and data association and reduce the annotation workload. The user study results show that the proposed annotation system is superior in terms of V-measure versus annotation speed, as well as in false positive and false negative rates.
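For context on the evaluation metric mentioned above: V-measure is a standard clustering-comparison score (the harmonic mean of homogeneity and completeness), so it can compare segmentation label assignments against ground truth while ignoring label permutations. A quick illustration, assuming scikit-learn is available (the label arrays below are made-up toy data, not from the paper):

```python
from sklearn.metrics import v_measure_score

# Hypothetical ground-truth vs. annotated segment labels for six laser points.
gt   = [0, 0, 0, 1, 1, 2]
pred = [1, 1, 1, 0, 0, 2]  # same grouping, different label ids

score = v_measure_score(gt, pred)  # permuted labels still score 1.0
```

Because the grouping is identical up to relabeling, the score is a perfect 1.0; splitting or merging ground-truth segments would lower it.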