This blog is maintained by the Robot Perception and Learning Lab at CSIE, NTU, Taiwan.
Our scientific interests are driven by the desire to build intelligent robots and computers capable of serving people more efficiently than equivalent manned systems across a wide variety of dynamic and unstructured environments.
Title: Scene semantics from long-term observation of people
Authors: Vincent Delaitre, David F. Fouhey, Ivan Laptev, Josef Sivic, Abhinav Gupta, and Alexei A. Efros
Our everyday objects support various tasks and can be used by people for different purposes. While object classification is a widely studied topic in computer vision, recognition of object function, i.e., what people can do with an object and how they do it, is rarely addressed. In this paper we construct a functional object description with the aim to recognize objects by the way people interact with them. We describe scene objects (sofas, tables, chairs) by associated human poses and object appearance. Our model is learned discriminatively from automatically estimated body poses in many realistic scenes. In particular, we make use of time-lapse videos from YouTube, providing a rich source of common human-object interactions and minimizing the effort of manual object annotation. We show how the models learned from human observations significantly improve object recognition and enable prediction of characteristic human poses in new scenes. Results are shown on a dataset of more than 400,000 frames obtained from 146 time-lapse videos of challenging and realistic indoor scenes.
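To make the idea concrete for our reading group, here is a minimal sketch (not the authors' actual pipeline) of how one might score an object class by combining its appearance with the human poses observed around it. The feature dimensions, the linear scoring form, and the max-pooling over poses are all our own illustrative assumptions:

```python
# Hypothetical sketch of the paper's core idea: score an object region by
# combining its appearance with the poses people take when using it.
# Features, dimensions, and the scoring form are illustrative assumptions.
import numpy as np

def functional_object_score(appearance_feat, pose_feats, w_app, w_pose):
    """Score one candidate object region for a given class.

    appearance_feat : (d_a,) appearance descriptor (e.g. a HOG-like vector)
    pose_feats      : (n, d_p) descriptors of poses observed at this region
                      over the time-lapse video
    w_app, w_pose   : linear weights learned discriminatively per class
    """
    app_score = w_app @ appearance_feat
    # Aggregate evidence from many observed human poses; max-pooling is one
    # simple choice for "the most characteristic interaction seen here".
    pose_score = max(w_pose @ p for p in pose_feats) if len(pose_feats) else 0.0
    return app_score + pose_score

# Toy usage with random features and weights.
rng = np.random.default_rng(0)
score = functional_object_score(rng.normal(size=64),
                                rng.normal(size=(10, 32)),
                                rng.normal(size=64),
                                rng.normal(size=32))
print(f"sofa score: {score:.3f}")
```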
Title: Unstructured Human Activity Detection from RGBD Images
Authors: Jaeyong Sung, C. Ponce, B. Selman, and A. Saxena (Cornell University)
Being able to detect and recognize human activities is essential for several applications, including personal assistive robotics. In this paper, we perform detection and recognition of unstructured human activity in unstructured environments. We use an RGBD sensor (Microsoft Kinect) as the input sensor, and compute a set of features based on human pose and motion, as well as on image and point-cloud information. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM), which considers a person's activity as composed of a set of sub-activities. We infer the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, and an office, and achieve good performance even when the person was not seen in the training set before.
2012 IEEE International Conference on Robotics and Automation (ICRA)
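As a toy illustration of the dynamic-programming inference mentioned in the abstract, the sketch below runs Viterbi-style decoding over a table of MEMM log-probabilities log P(s_t | s_{t-1}, x_t). The random table, the fixed start state, and the single (non-hierarchical) layer are simplifying assumptions on our part; in the paper the probabilities come from learned maximum-entropy classifiers over pose, image, and point-cloud features:

```python
# A minimal sketch of Viterbi-style dynamic programming for an MEMM: pick the
# best sub-activity sequence given per-frame log P(s_t | s_{t-1}, x_t).
# The log-probability table here is random, purely for demonstration.
import numpy as np

def memm_viterbi(log_prob):
    """log_prob[t, prev, cur] = log P(state cur at frame t | prev state, obs).
    Returns the highest-scoring state sequence."""
    T, S, _ = log_prob.shape
    score = log_prob[0, 0, :].copy()          # assume a fixed start state 0
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_prob[t]   # (prev, cur) combined scores
        back[t] = cand.argmax(axis=0)         # best predecessor per state
        score = cand.max(axis=0)
    # Backtrack from the best final state.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 20 frames, 4 sub-activity states, random transition distributions.
rng = np.random.default_rng(1)
print(memm_viterbi(np.log(rng.dirichlet(np.ones(4), size=(20, 4)))))
```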
Title: Mining actionlet ensemble for action recognition with depth cameras
Authors: Jiang Wang, Zicheng Liu, Ying Wu, and Junsong Yuan
Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem, but also present some unique challenges. The depth maps captured by depth cameras are very noisy, and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to state-of-the-art algorithms.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012
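To get a feel for skeleton features that are translation-invariant and tolerant to temporal misalignment, here is a rough sketch in the spirit of the paper: relative 3D joint positions summarized by the magnitudes of low-frequency Fourier coefficients. The joint count, the frequency cutoff, and the use of a plain FFT instead of the paper's full Fourier temporal pyramid are simplifying assumptions on our part:

```python
# Rough sketch of translation-invariant, misalignment-tolerant skeleton
# features: relative 3D joint positions plus low-frequency Fourier magnitudes.
import numpy as np

def skeleton_feature(joints, n_freq=3):
    """joints: (T, J, 3) tracked 3D joint positions over T frames.
    Returns a fixed-length descriptor."""
    # Translation invariance: express every joint relative to the first
    # (e.g. hip-center) joint in each frame.
    rel = joints - joints[:, :1, :]                    # (T, J, 3)
    # Tolerance to temporal misalignment: keep only the magnitudes of the
    # lowest-frequency Fourier coefficients of each coordinate's trajectory
    # (magnitudes are unchanged by a circular shift in time).
    spec = np.abs(np.fft.rfft(rel, axis=0))[:n_freq]   # (n_freq, J, 3)
    return spec.ravel()

# Toy usage: 60 frames of 20 random joints.
rng = np.random.default_rng(2)
feat = skeleton_feature(rng.normal(size=(60, 20, 3)))
print(feat.shape)  # (n_freq * J * 3,) = (180,)
```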