Authors: Ruonan Li, Parker Porfilio, Todd Zickler
Abstract:
We consider the problem of finding distinctive social interactions involving groups of agents embedded in larger social gatherings. Given a pre-defined gallery of short exemplar interaction videos, and a long input video of a large gathering (with approximately-tracked agents), we identify within the gathering small sub-groups of agents exhibiting social interactions that resemble those in the exemplars. The participants of each detected group interaction are localized in space; the extent of their interaction is localized in time; and when the gallery of exemplars is annotated with group-interaction categories, each detected interaction is classified into one of the pre-defined categories. Our approach represents group behaviors by dichotomous collections of descriptors for (a) individual actions, and (b) pairwise interactions; and it includes efficient algorithms for optimally distinguishing participants from by-standers in every temporal unit and for temporally localizing the extent of the group interaction. Most importantly, the method is generic and can be applied whenever numerous interacting agents can be approximately tracked over time. We evaluate the approach using three different video collections, two that involve humans and one that involves mice.
In: Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013
Link: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6619195
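As a loose illustration of the descriptor-plus-matching idea described in the abstract (per-agent action descriptors, pairwise interaction descriptors, and scoring a candidate sub-group against an exemplar), here is a minimal toy sketch in Python. The speed and inter-agent-distance histograms and the Euclidean matching score below are placeholders I made up for illustration; they are not the descriptors or algorithms used in the paper.

```python
# Toy sketch only -- not the paper's actual features or matching method.
import numpy as np

def individual_descriptor(track, bins=8, max_speed=5.0):
    """Histogram of an agent's frame-to-frame speeds (toy 'action' feature)."""
    speeds = np.linalg.norm(np.diff(track, axis=0), axis=1)
    hist, _ = np.histogram(speeds, bins=bins, range=(0, max_speed), density=True)
    return hist

def pairwise_descriptor(track_a, track_b, bins=8, max_dist=10.0):
    """Histogram of inter-agent distances over time (toy 'interaction' feature)."""
    dists = np.linalg.norm(track_a - track_b, axis=1)
    hist, _ = np.histogram(dists, bins=bins, range=(0, max_dist), density=True)
    return hist

def group_descriptor(tracks):
    """Combine averaged individual and pairwise descriptors for a sub-group."""
    indiv = np.mean([individual_descriptor(t) for t in tracks], axis=0)
    pairs = [pairwise_descriptor(tracks[i], tracks[j])
             for i in range(len(tracks)) for j in range(i + 1, len(tracks))]
    return np.concatenate([indiv, np.mean(pairs, axis=0)])

def match_score(candidate_tracks, exemplar_tracks):
    """Lower = more similar; distance between the two group descriptors."""
    return np.linalg.norm(group_descriptor(candidate_tracks)
                          - group_descriptor(exemplar_tracks))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake (T, 2) trajectories for a 3-agent candidate group and a 3-agent exemplar.
    candidate = [rng.normal(size=(50, 2)).cumsum(axis=0) for _ in range(3)]
    exemplar = [rng.normal(size=(50, 2)).cumsum(axis=0) for _ in range(3)]
    print("match score:", match_score(candidate, exemplar))
```

The actual method additionally selects which tracked agents are participants (versus by-standers) in each temporal unit and localizes the interaction in time, which this sketch does not attempt; see the paper for those algorithms.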