## Total Variation Blind Deconvolution: The Devil is in the Details

#### Paolo Favaro, Universität Bern

In the past decade a renewed major effort has been devoted to the problem of blind deconvolution. Many current approaches are built on an iterative alternating energy minimization, where at each step either the sharp image or the blur kernel is reconstructed. Much of the success of these algorithms can be attributed to the use of sparse gradient priors. However, recent work of Levin et al. has shown that this class of algorithms suffers from a major shortcoming: they favor the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. In contrast, one can observe experimentally that these alternating minimization algorithms converge to the desired solution even when initialized with the no-blur one. We present both analysis and experiments to resolve this paradox, and we find that both observations are correct. Our analysis is based on the most basic of these algorithms, introduced in the early work of You and Kaveh in 1996 and later by Chang and Wong in 1998. We show that the procedure of Chang and Wong does not minimize the cost function it sets out to minimize. In particular, the delayed scaling (normalization) in the iterative update of the blur kernel is fundamental to the convergence of the algorithm. We show that this small implementation detail is what allows the algorithm to elude the no-blur solution, a property not shared by current variants of Chang and Wong's algorithm. We also introduce our own adaptation of this algorithm and show that, despite its extreme simplicity, it is very robust and achieves performance on par with the state of the art.
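To make the idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of alternating-minimization total variation blind deconvolution in NumPy/SciPy. All parameter values, the smoothed TV gradient, and the function names are our own assumptions for illustration. The key point from the abstract is reproduced in the kernel step: the blur kernel takes an *unconstrained* gradient step and is normalized only *afterwards* (the "delayed scaling"), rather than having the normalization constraint enforced inside the minimization.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d


def tv_grad(u, eps=1e-3):
    """Gradient of a smoothed total variation term, -div(grad u / |grad u|).

    eps is an assumed smoothing constant to avoid division by zero.
    """
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    return -(px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0))


def tv_blind_deconv(f, ksize=3, n_iter=50, lam=0.002, du=0.05, dk=1e-3):
    """Sketch of alternating gradient descent on ||k * u - f||^2 + lam * TV(u).

    f      : blurry input image (2D array)
    ksize  : assumed (odd) kernel support size
    Returns the estimated sharp image u and blur kernel k.
    """
    u = f.copy()
    # Initialize with the no-blur solution: u = f, k = Dirac delta.
    k = np.zeros((ksize, ksize))
    k[ksize // 2, ksize // 2] = 1.0
    for _ in range(n_iter):
        # Image step: one gradient step on the data term plus the TV prior.
        r = convolve2d(u, k, mode='same') - f
        grad_u = convolve2d(r, k[::-1, ::-1], mode='same') + lam * tv_grad(u)
        u = u - du * grad_u
        # Kernel step: UNCONSTRAINED gradient step on the data term.
        r = convolve2d(u, k, mode='same') - f
        G = correlate2d(r, u, mode='full')  # d/dk of ||k*u - f||^2
        c0, c1 = G.shape[0] // 2, G.shape[1] // 2
        h = ksize // 2
        k = k - dk * G[c0 - h:c0 + h + 1, c1 - h:c1 + h + 1]
        # Delayed scaling: project k onto the simplex (nonnegative, sum 1)
        # only AFTER the step, not as a constraint inside the minimization.
        k = np.clip(k, 0.0, None)
        k = k / (k.sum() + 1e-12)
    return u, k
```

Performing the normalization as a separate projection after each unconstrained kernel update is precisely the detail the abstract identifies as what lets the iteration escape the no-blur solution; folding the constraint into the cost being minimized changes the algorithm's behavior.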
