Sunday, December 29, 2013

Lab Meeting, January 2nd, 2014 (Gene Chang): Zhou, Feng, and Fernando De la Torre. "Deformable Graph Matching." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013

Title:
Deformable Graph Matching

Author:
Feng Zhou and Fernando De la Torre
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract:
Graph matching (GM) is a fundamental problem in computer science, and it has been successfully applied to many problems in computer vision. Although widely used, existing GM algorithms cannot incorporate global consistency among nodes, which is a natural constraint in computer vision problems. This paper proposes deformable graph matching (DGM), an extension of GM for matching graphs subject to global rigid and non-rigid geometric constraints. The key idea of this work is a new factorization of the pair-wise affinity matrix. This factorization decouples the affinity matrix into the local structure of each graph and the pair-wise affinity edges. Besides the ability to incorporate global geometric transformations, this factorization offers three more benefits. First, there is no need to compute the costly (in space and time) pair-wise affinity matrix. Second, it provides a unified view of many GM methods and extends the standard iterative closest point algorithm. Third, it allows the use of path-following optimization algorithms, which leads to improved optimization strategies and matching performance. Experimental results on synthetic and real databases illustrate how DGM outperforms state-of-the-art algorithms for GM. The code is available at http://humansensing.cs.cmu.edu/fgm.

From:
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013

Link:
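The abstract's first claimed benefit, never materializing the pair-wise affinity matrix, can be illustrated with a toy sketch. This is not the paper's actual FGM/DGM factorization, just a minimal demonstration that the GM score of a given correspondence can be computed by iterating over matched edge pairs instead of building the O(n^4) matrix K:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# graph 1: symmetric random edge weights, no self-loops
A1 = rng.random((n, n)); A1 = (A1 + A1.T) / 2; np.fill_diagonal(A1, 0)

# graph 2: a node-permuted copy of graph 1
pi = np.array([2, 0, 3, 1])        # node i in G1 corresponds to node pi[i] in G2
inv = np.argsort(pi)
A2 = A1[np.ix_(inv, inv)]          # so A2[pi[i], pi[j]] == A1[i, j]

# full pair-wise affinity matrix: (n*n) x (n*n) entries, i.e. O(n^4) memory
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        if i == j: continue
        for a in range(n):
            for b in range(n):
                if a == b: continue
                K[i * n + a, j * n + b] = np.exp(-(A1[i, j] - A2[a, b]) ** 2)

X = np.zeros((n, n)); X[np.arange(n), pi] = 1   # ground-truth correspondence
x = X.reshape(-1)                               # row-major vec: x[i*n + a] == X[i, a]
score_full = x @ K @ x

# decoupled form: iterate over G1's edges and look up the matched edge in G2,
# so the huge matrix K is never materialized
score_fact = 0.0
for i in range(n):
    for j in range(n):
        if i == j: continue
        a, b = pi[i], pi[j]
        score_fact += np.exp(-(A1[i, j] - A2[a, b]) ** 2)

assert np.isclose(score_full, score_fact)
```

Since the second graph here is an exact permuted copy, every matched edge pair contributes affinity 1 and both computations give n(n-1); the point is only that the factorized loop touches O(n^2) edge pairs while the dense matrix costs O(n^4) space.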

Thursday, December 26, 2013

Lab Meeting, December 26, 2013 (Tom Hsu): An Efficient Motion Segmentation Algorithm for Multibody RGB-D SLAM, (Proceedings of Australasian Conference on Robotics and Automation, 2-4 Dec 2013)

Title:
An Efficient Motion Segmentation Algorithm for Multibody RGB-D SLAM

Author:
Youbing Wang, Shoudong Huang
Faculty of Engineering and IT, University of Technology, Sydney, Australia

Abstract:
A simple motion segmentation algorithm using only two frames of RGB-D data is proposed, and both simulation and experimental segmentation results show its efficiency and reliability. To further verify its usability in multi-body SLAM scenarios, we first apply it to a typical simulated multi-body SLAM problem with only an RGB-D camera, and then use it to segment a real RGB-D dataset collected by ourselves. Based on the good results of our motion segmentation algorithm, we obtain satisfactory SLAM results for the simulated problem; the segmentation results on real data also allow us to compute visual odometry for each motion group, facilitating the subsequent steps in solving practical multi-body RGB-D SLAM problems.

From:
Proceedings of Australasian Conference on Robotics and Automation, 2-4 Dec 2013, University of New South Wales, Sydney Australia

Link:
paper
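The core idea, segmenting rigid motion groups from just two RGB-D frames, can be sketched with a toy example. The paper's algorithm is more sophisticated; this is only a stand-in that clusters per-point 3D displacement vectors between the two frames, which already separates a static background from one rigidly translating object:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic scene: 30 static background points plus 20 points on a moving object,
# each with 3D positions recovered from the RGB-D depth channel
static = rng.uniform(-1, 1, (30, 3))
moving = rng.uniform(-1, 1, (20, 3))
pts1 = np.vstack([static, moving])           # frame 1
t = np.array([0.5, 0.0, 0.1])                # the object's translation between frames
pts2 = pts1.copy()
pts2[30:] += t                               # frame 2: only the object moved

# displacement of each point between the two frames
disp = pts2 - pts1

# greedy clustering of displacement vectors: points whose motions agree
# (within a threshold) are assigned to the same motion group
labels = -np.ones(len(disp), dtype=int)
thresh = 0.05
next_label = 0
for i in range(len(disp)):
    if labels[i] >= 0:
        continue
    mask = np.linalg.norm(disp - disp[i], axis=1) < thresh
    labels[mask] = next_label
    next_label += 1
```

Running this recovers two motion groups: one for the static points and one for the moving object. With the groups in hand, one could estimate a separate rigid transform (visual odometry) per group, as the abstract describes for the multi-body SLAM pipeline.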

Monday, December 16, 2013

Lab Meeting, December 19, 2013 (Yen-Ting): Deformable Spatial Pyramid Matching for Fast Dense Correspondences

Title: Deformable Spatial Pyramid Matching for Fast Dense Correspondences

Authors: Jaechul Kim, Ce Liu, Fei Sha and Kristen Grauman

Abstract: We introduce a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences. Dense matching methods typically enforce both appearance agreement between matched pixels as well as geometric smoothness between neighboring pixels. Whereas the prevailing approaches operate at the pixel level, we propose a pyramid graph model that simultaneously regularizes match consistency at multiple spatial extents—ranging from an entire image, to coarse grid cells, to every single pixel. This novel regularization substantially improves pixel-level matching in the face of challenging image variations, while the “deformable” aspect of our model overcomes the strict rigidity of traditional spatial pyramids. Results on LabelMe and Caltech show our approach outperforms state-of-the-art methods (SIFT Flow [15] and PatchMatch [2]), both in terms of accuracy and run time.

P.S.
[2] C. Barnes, E. Shechtman, D. Goldman, and A. Finkelstein. The Generalized PatchMatch Correspondence Algorithm. In ECCV, 2010.
[15] C. Liu, J. Yuen, and A. Torralba. SIFT Flow: Dense Correspondence across Different Scenes and Its Applications. PAMI, 33(5), 2011.

From: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013

Link: Click here
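The pyramid idea, regularizing matches from the whole image down to cells and pixels, can be sketched in miniature. This is a hypothetical two-level toy (whole image, then 4x4 grid cells), not the paper's DSP model: the root picks a global translation by appearance cost, and each cell then trades its own appearance cost against staying close to the root's choice, which is the "deformable" smoothness term:

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 8
img1 = rng.random((H, W))
shift = (1, 2)                                    # ground-truth translation
img2 = np.roll(img1, shift, axis=(0, 1))          # img2 is img1 shifted (with wrap)

# candidate translations each pyramid node may take
cands = [(dy, dx) for dy in range(-2, 3) for dx in range(-2, 3)]

def patch_cost(y0, y1, x0, x1, dy, dx):
    # appearance cost of matching a cell of img1 under translation (dy, dx):
    # shift img2 back and compare intensities over the cell
    a = img1[y0:y1, x0:x1]
    b = np.roll(img2, (-dy, -dx), axis=(0, 1))[y0:y1, x0:x1]
    return float(np.abs(a - b).sum())

# level 0: the whole image picks the single best global translation
root = min(cands, key=lambda d: patch_cost(0, H, 0, W, *d))

# level 1: each 4x4 cell balances its own appearance cost against a
# smoothness penalty for deviating from its parent (the root)
lam = 0.5
cells = {}
for cy in range(2):
    for cx in range(2):
        y0, x0 = cy * 4, cx * 4
        cells[(cy, cx)] = min(
            cands,
            key=lambda d: patch_cost(y0, y0 + 4, x0, x0 + 4, *d)
            + lam * (abs(d[0] - root[0]) + abs(d[1] - root[1])),
        )
```

On this rigidly shifted pair, the root and all cells agree on the true translation; the benefit of the deformable hierarchy appears when different regions move differently, in which case cells can deviate from the root at a smoothness cost rather than being locked to one rigid pyramid.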