Speaker: Rob Fergus from MIT
Abstract:
Camera shake during exposure leads to objectionable image blur and ruins
many photographs. Conventional blind deconvolution methods typically
assume frequency-domain constraints on images, or overly simplified
parametric forms for the motion path during camera shake. Real camera
motions can follow convoluted paths, and a spatial domain prior can better
maintain visually salient image characteristics. We introduce a method to
remove the effects of camera shake from severely blurred images. The
method assumes a uniform camera blur over the image, negligible in-plane
camera rotation, and no blur due to moving objects in the scene. The user
must specify an image region without saturation effects. I'll discuss
issues in this blind deconvolution problem, and show results for a variety
of digital photographs.
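
To make the uniform-blur assumption concrete, here is a minimal sketch in Python (NumPy/SciPy) of the image-formation model it implies; this is my own illustration, not the talk's code, and the variable names (L for the sharp image, K for the kernel, B for the blurred photo) and the toy kernel are hypothetical.

    import numpy as np
    from scipy.signal import fftconvolve

    # Uniform camera blur: the blurred photo B is the sharp (latent) image L
    # convolved with ONE blur kernel K (the camera's motion path), plus noise.
    # The same kernel applies at every pixel, hence "uniform" blur.
    rng = np.random.default_rng(0)

    L = rng.random((256, 256))      # stand-in for the unknown sharp image
    K = np.zeros((15, 15))
    K[7, 3:12] = 1.0                # toy horizontal motion path
    K /= K.sum()                    # kernels are non-negative and sum to 1

    B = fftconvolve(L, K, mode="same") + 0.01 * rng.standard_normal(L.shape)

    # Blind deconvolution must recover both L and K given only B; the talk's
    # approach first estimates the kernel using a spatial-domain prior on
    # image statistics, then applies non-blind deconvolution to recover L.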
Invitation: I invite audience members to submit a few examples of
motion-blurred photographs to me a few days before the talk. I'll show
the examples and our algorithm's output on these examples during the
talk. Make sure that the images have blur due to camera motion, rather
than just being out of focus. If you have a favorite blind deconvolution
algorithm, you can also send me that algorithm's result and I'll show that
too.
Joint work with Aaron Hertzmann, Bill Freeman, Sam Roweis, and
Barun Singh.
Short Bio:
Dr. Rob Fergus is currently a post-doc with Prof. William Freeman at MIT
in the Computer Science and Artificial Intelligence Lab (CSAIL). He
recently completed his PhD in Prof. Andrew Zisserman's group at Oxford, where
he collaborated closely with Prof. Pietro Perona at Caltech. Rob's
research is in the field of computer vision; more specifically, his
interests include probabilistic models for object category recognition,
methods for learning from noisy data, and efficient computational methods
in vision.