Tuesday, August 26, 2014

Lab Meeting August 28th, 2014 (Hung Chih Lu): Dynamic Scene Deblurring

Title:   Dynamic Scene Deblurring

Authors:  Tae Hyun Kim, Byeongjoo Ahn, and Kyoung Mu Lee

Most conventional single-image deblurring methods assume that the underlying scene is static and that the blur is caused only by camera shake. In this paper, in contrast to this restrictive assumption, we address the deblurring problem of general dynamic scenes, which contain multiple moving objects as well as camera shake. In dynamic scenes, moving objects and the background have different blur motions, so segmenting the motion blur is required to deblur each distinct blur motion accurately. Thus, we propose a novel energy model designed as the weighted sum of multiple blur data models, which estimates the different motion blurs, their associated pixelwise weights, and the resulting sharp image. In this framework, the local weights are determined adaptively and take high values when the corresponding data models have high data fidelity; this weight information is then used for the segmentation of the motion blur. Non-local regularization of the weights is also incorporated to produce more reliable segmentation results. A convex optimization-based method is used to solve the proposed energy model. Experimental results demonstrate that our method outperforms conventional approaches in deblurring both dynamic and static scenes.
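The weighted-sum idea can be illustrated with a toy sketch. Everything below (the names `pixelwise_weights` and `beta`, the softmax-style normalization, the 1D blur) is an illustrative assumption rather than the paper's formulation: the paper determines the weights jointly with the blur kernels and the sharp image inside a convex energy minimization, whereas this sketch only shows how per-pixel data fidelity under competing blur models yields per-pixel weights that segment the motion blur.

```python
import math

def convolve1d(signal, kernel):
    """'Same'-size 1D convolution with zero padding (toy stand-in for k_i * L)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                s += k * signal[idx]
        out.append(s)
    return out

def pixelwise_weights(blurred, latent, kernels, beta=10.0):
    """Weight each blur model at each pixel by its data fidelity:
    w_i(p) proportional to exp(-beta * (k_i * L - B)^2 at p),
    normalized so the weights over the models sum to 1 at every pixel."""
    residuals = []
    for k in kernels:
        pred = convolve1d(latent, k)
        residuals.append([(p - b) ** 2 for p, b in zip(pred, blurred)])
    weights = []
    for p in range(len(blurred)):
        scores = [math.exp(-beta * residuals[i][p]) for i in range(len(kernels))]
        z = sum(scores)
        weights.append([s / z for s in scores])
    return weights
```

For example, with two candidate kernels (an identity "no blur" kernel and a box blur), pixels whose observed values are better explained by one kernel receive a high weight for that model, which is exactly the information the paper uses to segment regions with distinct blur motions.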

ICCV 2013

Link: http://personal.ie.cuhk.edu.hk/~ccloy/files/iccv_2013_synopsis.pdf

Wednesday, August 20, 2014

Lab Meeting August 21st, 2014 (Henry): Model Globally, Match Locally: Efficient and Robust 3D Object Recognition

Title:  Model Globally, Match Locally: Efficient and Robust 3D Object Recognition

Authors:  Bertram Drost, Markus Ulrich, Nassir Navab, Slobodan Ilic

This paper addresses the problem of recognizing free-form 3D objects in point clouds. Compared to traditional approaches based on point descriptors, which depend on local information around points, we propose a novel method that creates a global model description based on oriented point pair features and matches that model locally using a fast voting scheme. The global model description consists of all model point pair features and represents a mapping from the point pair feature space to the model, where similar features on the model are grouped together. Such a representation allows using much sparser object and scene point clouds, resulting in very fast performance. Recognition is done locally using an efficient voting scheme on a reduced two-dimensional search space. We demonstrate the efficiency of our approach and show its high recognition performance in the presence of noise, clutter, and partial occlusions. Compared to state-of-the-art approaches, we achieve better recognition rates, and demonstrate that with a slight or even no sacrifice in recognition performance our method is much faster than the current state of the art.
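The oriented point pair feature behind the global model description is F(m1, m2) = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)), where d is the vector between the two points and n1, n2 are their normals. A minimal sketch of computing it and grouping similar features into a hash table is below; the function names and discretization steps are illustrative assumptions, and a full implementation would add the local voting scheme over the reduced two-dimensional search space.

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle between two vectors, clamped for numerical safety."""
    c = dot(a, b) / (norm(a) * norm(b))
    return math.acos(max(-1.0, min(1.0, c)))

def point_pair_feature(p1, n1, p2, n2):
    """F(m1, m2) = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))."""
    d = sub(p2, p1)
    return (norm(d), angle(n1, d), angle(n2, d), angle(n1, n2))

def quantize(feature, d_step=0.05, a_step=math.radians(12)):
    """Discretize a feature into a hash key so similar features collide."""
    dist, a1, a2, a3 = feature
    return (int(dist / d_step), int(a1 / a_step),
            int(a2 / a_step), int(a3 / a_step))

def build_global_model(points, normals, d_step=0.05, a_step=math.radians(12)):
    """Global model description: a hash table mapping discretized point pair
    features to the model point pairs that produced them, so similar features
    on the model are grouped together."""
    table = {}
    for i, (p1, n1) in enumerate(zip(points, normals)):
        for j, (p2, n2) in enumerate(zip(points, normals)):
            if i == j:
                continue
            key = quantize(point_pair_feature(p1, n1, p2, n2), d_step, a_step)
            table.setdefault(key, []).append((i, j))
    return table
```

At recognition time, scene point pairs are hashed with the same quantization; each hit in the table casts a vote for a model pose, which is why sparse clouds suffice and matching is fast.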

Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on