Modeling Mutual Context of Object and Human Pose in Human-Object Interaction Activities
Bangpeng Yao
Li Fei-Fei
Abstract
Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other.
In this paper, we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. We then cast model learning as a structure learning problem, in which the structural connectivity between the object, the overall human pose, and the different body parts is estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm.
On a sports data set of six classes of human-object interactions, we show that our mutual context model significantly outperforms the state of the art in detecting very difficult objects and human poses.
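The mutual-context idea above can be illustrated with a toy sketch (this is not the paper's actual random field model or its learning algorithm): a joint score over an object label and a pose label that combines unary detector scores with a pairwise compatibility term, so that a confident pose estimate can override a weak object detection. All class names and score values below are made up for illustration.

```python
from itertools import product

# Toy sketch of mutual context (NOT the paper's model): unary detector
# scores plus a pairwise object-pose compatibility term. All values are
# hypothetical.

unary_object = {"tennis_racket": 0.4, "croquet_mallet": 0.9}  # weak object detector
unary_pose = {"tennis_serve": 1.0, "croquet_shot": 0.2}       # pose estimator scores

# Pairwise compatibility: object-pose pairs that co-occur reinforce each other.
compat = {
    ("tennis_racket", "tennis_serve"): 1.5,
    ("tennis_racket", "croquet_shot"): 0.1,
    ("croquet_mallet", "tennis_serve"): 0.1,
    ("croquet_mallet", "croquet_shot"): 1.5,
}

def joint_score(obj, pose):
    """Sum of unary terms and the pairwise mutual-context term."""
    return unary_object[obj] + unary_pose[pose] + compat[(obj, pose)]

# Jointly most likely (object, pose) pair over all combinations.
best = max(product(unary_object, unary_pose), key=lambda op: joint_score(*op))
print(best)  # -> ('tennis_racket', 'tennis_serve')
```

Note that the object detector alone prefers `croquet_mallet` (0.9 vs. 0.4), but the confident `tennis_serve` pose and the compatibility term flip the joint decision to `tennis_racket`, which is the kind of mutual disambiguation the abstract describes.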
Paper Link