Single Image Deblurring Using Motion Density Functions

Ankit Gupta [1], Neel Joshi [2], Larry Zitnick [2], Michael Cohen [1,2], Brian Curless [1]

[1] University of Washington, [2] Microsoft Research


Teaser figure: blurred input image; our deblurred result; result of a spatially-invariant method (Shan et al.).

Abstract

In this paper, we present a novel single image deblurring method to handle camera shake that leads to spatially non-uniform blur kernels. The camera motion is represented as a Motion Density Function (MDF), which records the fraction of exposure time spent in each discretized portion of the space of all possible camera poses; spatially varying blur kernels can then be derived directly from the MDF. We place sparsity and compactness priors on the MDF and formulate an optimization problem that iteratively solves for both the MDF and the deblurred image. Existing spatially-invariant deconvolution methods, applied locally and robustly, are used to initialize priors for portions of the latent image. We show that general 6D camera motion is well approximated by 3 degrees of motion (in-plane translation and rotation) and analyze the scope of this approximation. We present results on both synthetic and captured data. Our system outperforms current state-of-the-art approaches, which assume spatially invariant blur kernels.
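To make the MDF forward model concrete, here is a minimal sketch of the blur synthesis step described above: the blurred image is the MDF-weighted sum of the latent image warped by each discretized camera pose. It assumes a 3-DOF pose grid of x/y translation plus in-plane rotation, and the function and variable names (e.g. apply_mdf_blur) are illustrative; this is not the code used in the paper.

import numpy as np
from scipy.ndimage import affine_transform

def apply_mdf_blur(latent, mdf, poses):
    """Synthesize a blurred image from a sharp one under a given MDF.

    latent : (H, W) float array, the sharp (latent) image.
    mdf    : (P,) nonnegative weights summing to 1 -- the fraction of
             exposure time spent at each discretized camera pose.
    poses  : (P, 3) array of (tx, ty, theta) samples; translations in
             pixels, in-plane rotation theta in radians.
    """
    h, w = latent.shape
    center = np.array([(h - 1) / 2.0, (w - 1) / 2.0])  # (row, col) image center
    blurred = np.zeros_like(latent)
    for weight, (tx, ty, theta) in zip(mdf, poses):
        if weight == 0.0:
            continue
        # Rotation about the image center in (row, col) coordinates,
        # followed by a shift of (ty, tx) pixels.
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        # affine_transform maps output coordinates to input coordinates:
        # input_coord = rot @ output_coord + offset
        offset = center - rot @ center - np.array([ty, tx])
        warped = affine_transform(latent, rot, offset=offset, order=1, mode='nearest')
        blurred += weight * warped
    return blurred

Per-pixel blur kernels follow by applying the same set of warps to a delta image centered at a pixel of interest; deblurring then inverts this forward model for both the MDF and the latent image under the priors described above.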

Citation

Downloads


Data


Analysis

An important observation in our work is that 6-dimensional camera motion can be approximated using 3 dimensions for typical camera shake. Here we provide validation for this approximation.
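As a rough numerical illustration of why small out-of-plane rotations behave like in-plane translations for narrow fields of view, the sketch below maps pixel locations through the pure-rotation homography H = K R K^(-1) and measures how much displacement remains after the best uniform shift is removed. The focal length, image size, and rotation angle are assumed example values, and the function names (rotation_yaw, displacement_stats) are illustrative; this is not the experiment from the paper.

import numpy as np

def rotation_yaw(phi):
    """Rotation by angle phi (radians) about the camera's vertical axis (yaw)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def displacement_stats(f, w, h, R):
    """Return (max pixel displacement, max residual after removing the best shift)."""
    K = np.array([[f, 0.0, w / 2.0],
                  [0.0, f, h / 2.0],
                  [0.0, 0.0, 1.0]])
    H = K @ R @ np.linalg.inv(K)          # homography induced by a pure rotation
    # Evaluate on a coarse grid of pixel locations covering the image.
    xs, ys = np.meshgrid(np.linspace(0, w - 1, 60), np.linspace(0, h - 1, 40))
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    mapped = H @ pts
    mapped = mapped[:2] / mapped[2]
    disp = mapped - pts[:2]               # per-pixel displacement caused by the rotation
    residual = disp - disp.mean(axis=1, keepdims=True)  # remove the best uniform shift
    return np.abs(disp).max(), np.abs(residual).max()

# Example values (assumed): 0.3 degrees of yaw on a 3000x2000 image with a
# focal length of 4000 pixels (roughly a 40-degree horizontal field of view).
print(displacement_stats(f=4000.0, w=3000, h=2000, R=rotation_yaw(np.deg2rad(0.3))))

The printed pair contrasts the total pixel displacement induced by the rotation with the residual that a single translation cannot capture; the analysis provided here quantifies this over the range of motions typical of camera shake.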

Acknowledgements

This work was supported by funding from the University of Washington Animation Research Labs, Microsoft, Adobe, and Pixar. We would like to thank Qi Shan for useful discussions about the performance of existing deblurring methods and for providing non-blind image deblurring code from his research.

Contact

Send any comments or questions to Ankit Gupta (Email: ankit [at] cs [dot] washington [dot] edu)