Articulated pose estimation with flexible mixtures-of-parts
Yi Yang
Deva Ramanan
Abstract
We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.
Paper Link
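The key computational claim of the abstract is that, when the co-occurrence and spatial relations form a tree, the best joint placement of all parts can be found by dynamic programming. A minimal sketch of that recursion is below; the scoring terms and function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of tree-structured dynamic programming for a
# mixture-of-parts model: each part picks its best location and mixture
# type given the best subtree scores already computed for its children.

def best_pose(children, appearance, pair_score, root=0):
    """children[i]: list of child part ids of part i.
    appearance[i][(loc, t)]: template score of part i with mixture type t at loc.
    pair_score(i, j, li, ti, lj, tj): spring + co-occurrence score for edge i-j."""
    memo = {}

    def score(i, loc, t):
        # Best score of the subtree rooted at part i, placed at loc with type t.
        if (i, loc, t) in memo:
            return memo[(i, loc, t)]
        s = appearance[i][(loc, t)]
        for j in children[i]:
            s += max(pair_score(i, j, loc, t, lj, tj) + score(j, lj, tj)
                     for (lj, tj) in appearance[j])
        memo[(i, loc, t)] = s
        return s

    return max(score(root, loc, t) for (loc, t) in appearance[root])
```

Because each part is scored once per (location, type) pair and messages flow child-to-parent, the cost is linear in the number of parts, which is what makes the exact inference the abstract mentions tractable.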
This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan. Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems in a wide variety of dynamic and unstructured environments.
Tuesday, May 31, 2011
Monday, May 16, 2011
ICRA 2011 Awards
Best Manipulation Paper
- WINNER! Characterization of Oscillating Nano Knife for Single Cell Cutting by Nanorobotic Manipulation System Inside ESEM: Yajing Shen, Masahiro Nakajima, Seiji Kojima, Michio Homma, Yasuhito Ode, Toshio Fukuda [pdf]
- Wireless Manipulation of Single Cells Using Magnetic Microtransporters: Mahmut Selman Sakar, Edward Steager, Anthony Cowley, Vijay Kumar, George J Pappas
- Hierarchical Planning in the Now: Leslie Kaelbling, Tomas Lozano-Perez
- Selective Injection and Laser Manipulation of Nanotool Inside a Specific Cell Using Optical Ph Regulation and Optical Tweezers: Hisataka Maruyama, Naoya Inoue, Taisuke Masuda, Fumihito Arai
- Configuration-Based Optimization for Six Degree-Of-Freedom Haptic Rendering for Fine Manipulation: Dangxiao Wang, Xin Zhang, Yuru Zhang, Jing Xiao
Best Vision Paper
- Model-Based Localization of Intraocular Microrobots for Wireless Electromagnetic Control: Christos Bergeles, Bradley Kratochvil, Bradley J. Nelson
- Fusing Optical Flow and Stereo in a Spherical Depth Panorama Using a Single-Camera Folded Catadioptric Rig: Igor Labutov, Carlos Jaramillo, Jizhong Xiao
- 3-D Scene Analysis Via Sequenced Predictions Over Points and Regions: Xuehan Xiong, Daniel Munoz, James Bagnell, Martial Hebert
- Fast and Accurate Computation of Surface Normals from Range Images: Hernan Badino, Daniel Huber, Yongwoon Park, Takeo Kanade
- WINNER! Sparse Distance Learning for Object Recognition Combining RGB and Depth Information: Kevin Lai, Liefeng Bo, Xiaofeng Ren, Dieter Fox [pdf]
Best Automation Paper
- WINNER! Automated Cell Manipulation: Robotic ICSI: Zhe Lu, Xuping Zhang, Clement Leung, Navid Esfandiari, Robert Casper, Yu Sun [pdf]
- Efficient AUV Navigation Fusing Acoustic Ranging and Side-Scan Sonar: Maurice Fallon, Michael Kaess, Hordur Johannsson, John Leonard
- Vision-Based 3D Bicycle Tracking Using Deformable Part Model and Interacting Multiple Model Filter: Hyunggi Cho, Paul E. Rybski, Wende Zhang
- High-Accuracy GPS and GLONASS Positioning by Multipath Mitigation Using Omnidirectional Infrared Camera: Taro Suzuki, Mitsunori Kitamura, Yoshiharu Amano, Takumi Hashizume
- Deployment of a Point and Line Feature Localization System for an Outdoor Agriculture Vehicle: Jacqueline Libby, George Kantor
Best Medical Robotics Paper
- Design of Adjustable Constant-Force Forceps for Robot-Assisted Surgical Manipulation: Chao-Chieh Lan, Jung-Yuan Wang
- Design Optimization of Concentric Tube Robots Based on Task and Anatomical Constraints: Chris Bedell, Jesse Lock, Andrew Gosline, Pierre Dupont
- GyroLock - First in Vivo Experiments of Active Heart Stabilization Using Control Moment Gyro (CMG): Julien Gagne, Olivier Piccin, Edouard Laroche, Michele Diana, Jacques Gangloff
- Metal MEMS Tools for Beating-Heart Tissue Approximation: Evan Butler, Chris Folk, Adam Cohen, Nikolay Vasilyev, Rich Chen, Pedro del Nido, Pierre Dupont
- WINNER! An Articulated Universal Joint Based Flexible Access Robot for Minimally Invasive Surgery: Jianzhong Shang, David Noonan, Christopher Payne, James Clark, Mikael Hans Sodergren, Ara Darzi, Guang-Zhong Yang [pdf]
Best Conference Paper
- WINNER! Minimum Snap Trajectory Generation and Control for Quadrotors: Daniel Mellinger, Vijay Kumar [pdf]
- Autonomous Multi-Floor Indoor Navigation with a Computationally Constrained Micro Aerial Vehicle: Shaojie Shen, Nathan Michael, Vijay Kumar
- Dexhand: A Space Qualified Multi-Fingered Robotic Hand: Maxime Chalon, Armin Wedler, Andreas Baumann, Wieland Bertleff, Alexander Beyer, Jörg Butterfass, Markus Grebenstein, Robin Gruber, Franz Hacker, Erich Krämer, Klaus Landzettel, Maximilian Maier, Hans-Juergen Sedlmayr, Nikolaus Seitz, Fabian Wappler, Bertram Willberg, Thomas Wimboeck, Frederic Didot, Gerd Hirzinger
- Time Scales and Stability in Networked Multi-Robot Systems: Mac Schwager, Nathan Michael, Vijay Kumar, Daniela Rus
- Bootstrapping Bilinear Models of Robotic Sensorimotor Cascades: Andrea Censi, Richard Murray
KUKA Service Robotics Best Paper
- Distributed Coordination and Data Fusion for Underwater Search: Geoffrey Hollinger, Srinivas Yerramalli, Sanjiv Singh, Urbashi Mitra, Gaurav Sukhatme
- WINNER! Dynamic Shared Control for Human-Wheelchair Cooperation: Qinan Li, Weidong Chen, Jingchuan Wang [pdf]
- Towards Joint Attention for a Domestic Service Robot -- Person Awareness and Gesture Recognition Using Time-Of-Flight Cameras: David Droeschel, Jorg Stuckler, Dirk Holz, Sven Behnke
- Electromyographic Evaluation of Therapeutic Massage Effect Using Multi-Finger Robot Hand: Ren C. Luo, Chih-Chia Chang
Best Video
- Catching Flying Balls and Preparing Coffee: Humanoid Rollin'Justin Performs Dynamic and Sensitive Tasks: Berthold Baeuml, Florian Schmidt, Thomas Wimboeck, Oliver Birbach, Alexander Dietrich, Matthias Fuchs, Werner Friedl, Udo Frese, Christoph Borst, Markus Grebenstein, Oliver Eiberger, Gerd Hirzinger
- Recent Advances in Quadrotor Capabilities: Daniel Mellinger, Nathan Michael, Michael Shomin, Vijay Kumar
- WINNER! High Performance of Magnetically Driven Microtools with Ultrasonic Vibration for Biomedical Innovations: Masaya Hagiwara, Tomohiro Kawahara, Lin Feng, Yoko Yamanishi, Fumihito Arai [pdf]
Best Cognitive Robotics Paper
- WINNER! Donut As I Do: Learning from Failed Demonstrations: Daniel Grollman, Aude Billard [pdf]
- A Discrete Computational Model of Sensorimotor Contingencies for Object Perception and Control of Behavior: Alexander Maye, Andreas Karl Engel
- Skill Learning and Task Outcome Prediction for Manipulation: Peter Pastor, Mrinal Kalakrishnan, Sachin Chitta, Evangelos Theodorou, Stefan Schaal
- Integrating Visual Exploration and Visual Search in Robotic Visual Attention: The Role of Human-Robot Interaction: Momotaz Begum, Fakhri Karray
Tuesday, May 03, 2011
Lab Meeting May 3rd (Andi): Face/Off: Live Facial Puppetry
Thibaut Weise, Hao Li, Luc Van Gool, Mark Pauly
We present a complete integrated system for live facial puppetry that enables high-resolution real-time facial expression tracking with transfer to another person's face. The system utilizes a real-time structured light scanner that provides dense 3D data and texture. A generic template mesh, fitted to a rigid reconstruction of the actor's face, is tracked offline in a training stage through a set of expression sequences. These sequences are used to build a person-specific linear face model that is subsequently used for online face tracking and expression transfer. Even with just a single rigid pose of the target face, convincing real-time facial animations are achievable. The actor becomes a puppeteer with complete and accurate control over a digital face.
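The heart of the pipeline above is the person-specific linear face model: an expression is the mean face plus a weighted combination of basis deformations, and transfer amounts to fitting those weights on the actor and replaying them on the target's basis. The sketch below illustrates that idea under assumed names and shapes; it is not the authors' code.

```python
import numpy as np

# Minimal sketch of expression transfer with person-specific linear face
# models: a face is mean + basis @ w, where the columns of `basis` are
# expression deformation vectors. Weights fit on the actor's frame are
# reused with the target's basis to retarget the expression.

def fit_coefficients(mean, basis, observed):
    # Least-squares fit of blend weights w so that mean + basis @ w ≈ observed.
    w, *_ = np.linalg.lstsq(basis, observed - mean, rcond=None)
    return w

def transfer(actor_mean, actor_basis, target_mean, target_basis, actor_frame):
    # Estimate the actor's expression weights, then synthesize the target face.
    w = fit_coefficients(actor_mean, actor_basis, actor_frame)
    return target_mean + target_basis @ w
```

Because only the low-dimensional weight vector is estimated per frame, this style of model is cheap enough for the real-time tracking and puppetry the abstract describes.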
Monday, May 02, 2011
Lab Meeting May 3rd (KuenHan): Multiple Targets Tracking in World Coordinate with a Single, Minimally Calibrated Camera (ECCV 2010)
Authors: Wongun Choi, Silvio Savarese
Abstract:
Tracking multiple objects is important in many application domains. We propose a novel algorithm for multi-object tracking that is capable of working under very challenging conditions such as minimal hardware equipment, an uncalibrated monocular camera, occlusions, and severe background clutter. To address this problem we propose a new method that jointly estimates object tracks, their corresponding 2D/3D temporal trajectories in the camera reference system, and the model parameters (pose, focal length, etc.) within a coherent probabilistic formulation. Since our goal is to estimate stable and robust tracks that can be uniquely associated to object IDs, we include in our formulation an interaction (attraction and repulsion) model that captures multiple 2D/3D trajectories in space-time and handles situations where objects occlude each other. We use an MCMC particle filtering algorithm for parameter inference and propose a solution that enables accurate and efficient tracking and camera model estimation. Qualitative and quantitative experimental results obtained using our own dataset and the publicly available ETH dataset show very promising tracking and camera estimation results.
Link
Website
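The abstract's attraction/repulsion interaction model scores pairs of trajectory hypotheses so that people walking together stay loosely grouped while two targets are discouraged from occupying the same spot. A toy version of such a pairwise potential is sketched below; the functional form and constants are assumptions for illustration, not taken from the paper.

```python
import math

# Illustrative pairwise interaction potential in the spirit of an
# attraction/repulsion model: trajectories closer than a physical radius
# incur a strong repulsion penalty (two targets cannot share one spot),
# while larger separations pay a mild attraction cost that keeps groups
# coherent. Higher (less negative) values mean a more plausible pair.

def interaction_log_potential(traj_a, traj_b, repel_radius=0.5, attract_scale=5.0):
    total = 0.0
    for (xa, ya), (xb, yb) in zip(traj_a, traj_b):
        d = math.hypot(xa - xb, ya - yb)
        if d < repel_radius:
            total -= 1e3 * (repel_radius - d)  # strong repulsion when overlapping
        else:
            total -= d / attract_scale         # mild attraction at a distance
    return total
```

In an MCMC particle filter, a term like this would be added to each hypothesis's log-likelihood, so sampled joint states with overlapping targets are almost always rejected.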