Title: Matching Local Self-Similarities across Images and Videos
Authors: Eli Shechtman and Michal Irani
Abstract:
We present an approach for measuring similarity between
visual entities (images or videos) based on matching
internal self-similarities. What is correlated across
images (or across video sequences) is the internal layout
of local self-similarities (up to some distortions), even
though the patterns generating those local self-similarities
are quite different in each of the images/videos. These internal
self-similarities are efficiently captured by a compact
local “self-similarity descriptor”, measured densely
throughout the image/video, at multiple scales, while accounting
for local and global geometric distortions. This
enables matching of complex visual data,
including detection of objects in real cluttered images using
only rough hand-sketches, handling textured objects with
no clear boundaries, and detecting complex actions in cluttered
video data with no prior learning. We compare our
measure to commonly used image-based and video-based
similarity measures, and demonstrate its applicability to object
detection, retrieval, and action detection.
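As a rough illustration of the idea, the descriptor compares a small patch around a pixel with every patch in a larger surrounding region (via sum of squared differences), converts the resulting SSD surface into a correlation surface, and then bins that surface into a log-polar grid, keeping the maximal value per bin. The sketch below assumes hypothetical parameter choices (5x5 patch, 40x40 region, 4 radial and 12 angular bins, and a fixed noise variance); it is a minimal reading of the abstract, not the authors' reference implementation.

```python
import numpy as np

def self_similarity_descriptor(img, y, x, patch=5, region=40,
                               radial_bins=4, angular_bins=12, var_noise=25.0):
    """Sketch of a local self-similarity descriptor at pixel (y, x).

    Parameters are illustrative assumptions; (y, x) must be far enough
    from the image border for the region to fit.
    """
    p, r = patch // 2, region // 2
    center = img[y - p:y + p + 1, x - p:x + p + 1].astype(float)

    # SSD between the central patch and every patch in the region.
    surf = np.zeros((region, region))
    for dy in range(-r, r):
        for dx in range(-r, r):
            cand = img[y + dy - p:y + dy + p + 1,
                       x + dx - p:x + dx + p + 1].astype(float)
            surf[dy + r, dx + r] = ((center - cand) ** 2).sum()

    # Turn SSD into a correlation surface in (0, 1]; the noise variance
    # absorbs small photometric differences.
    surf = np.exp(-surf / var_noise)

    # Log-polar binning, keeping the max correlation per bin; this is
    # what makes the descriptor tolerant to local geometric distortion.
    ys, xs = np.mgrid[-r:r, -r:r]
    rad = np.log1p(np.hypot(ys, xs))
    rad_idx = np.minimum((rad / rad.max() * radial_bins).astype(int),
                         radial_bins - 1)
    ang_idx = ((np.arctan2(ys, xs) + np.pi) /
               (2 * np.pi) * angular_bins).astype(int) % angular_bins
    desc = np.zeros((radial_bins, angular_bins))
    for i in range(radial_bins):
        for j in range(angular_bins):
            mask = (rad_idx == i) & (ang_idx == j)
            if mask.any():
                desc[i, j] = surf[mask].max()
    return desc.ravel()
```

Computing this densely over an image yields an ensemble of descriptors that can be matched across images even when the underlying patterns (edges, colors, textures) differ entirely.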