🤖 AI Summary
Blind motion deblurring (BMD) suffers from a highly non-convex optimization landscape and strong sensitivity to the initial blur kernel estimate, which severely limits the performance of deep prior-based methods. To address this, we propose a generative latent-variable kernel modeling framework: a GAN learns the prior distribution of blur kernels to provide high-quality initialization, while optimization is constrained to a compact latent kernel manifold, significantly improving stability and robustness. This work is the first to deeply integrate generative kernel modeling with kernel initialization, and it establishes a plug-and-play latent-kernel-manifold constraint that can be combined with existing BMD methods. We further extend the framework to spatially variant deblurring without requiring additional priors. Our method jointly estimates the blur kernel and the latent sharp image in an end-to-end manner. It achieves state-of-the-art performance across multiple benchmarks, excelling particularly in complex motion blur scenarios, with superior restoration accuracy and generalization compared to existing approaches.
📝 Abstract
Deep prior-based approaches have recently demonstrated remarkable success in blind motion deblurring (BMD). These methods, however, are often limited by the high non-convexity of the underlying optimization process in BMD, which leads to extreme sensitivity to the initial blur kernel. To address this issue, we propose a novel framework for BMD that leverages a deep generative model to encode the kernel prior and induce a better initialization for the blur kernel. Specifically, we pre-train a kernel generator based on a generative adversarial network (GAN) to aptly characterize the kernel's prior distribution, as well as a kernel initializer to provide a well-informed and high-quality starting point for kernel estimation. By combining these two components, we constrain the BMD solution to a compact latent kernel manifold, thus alleviating the aforementioned sensitivity to kernel initialization. Notably, the kernel generator and initializer are designed to be easily integrated with existing BMD methods in a plug-and-play manner, enhancing their overall performance. Furthermore, we extend our approach to tackle blind non-uniform motion deblurring without the need for additional priors, achieving state-of-the-art performance on challenging benchmark datasets. The source code is available at https://github.com/dch0319/GLKM-Deblur.
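To make the latent-kernel-manifold idea concrete, here is a minimal, hypothetical 1-D sketch (not the paper's implementation): a stand-in generator `G` maps a latent code `z` to a normalized blur kernel via a fixed random linear map and a softmax, so every iterate of the optimization is automatically a valid kernel (non-negative, summing to one). The true method uses a pretrained GAN generator, a learned initializer, and joint image/kernel estimation; this toy assumes a known sharp signal and estimates only the kernel's latent code by finite-difference gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pretrained GAN kernel generator G: z -> kernel.
# (Assumption for illustration: a fixed random linear map + softmax.)
W = rng.normal(size=(5, 4))          # kernel size 5, latent dim 4

def G(z):
    logits = W @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()               # non-negative, sums to 1

def blur(x, k):
    # 1-D convolution standing in for the blur operator
    return np.convolve(x, k, mode="same")

# Synthetic ground truth: sharp signal and a kernel from the manifold
x = rng.normal(size=64)
z_true = rng.normal(size=4)
y = blur(x, G(z_true))               # observed blurry signal

def loss(z):
    r = blur(x, G(z)) - y
    return float(r @ r)              # data-fidelity term

# Optimize the latent code z rather than raw kernel pixels, so the
# search stays on the compact kernel manifold defined by G.
z0 = np.zeros(4)                     # stand-in for the learned initializer
z = z0.copy()
eps, lr = 1e-5, 0.005
for _ in range(2000):
    g = np.array([(loss(z + eps * np.eye(4)[i]) - loss(z)) / eps
                  for i in range(4)])
    z -= lr * g

k_hat = G(z)                         # estimated kernel, still a valid PSF
```

In the full method, the sharp image is unknown and is estimated jointly with `z` (e.g., via a deep image prior), but the core mechanism is the same: because the kernel is re-parameterized through `G`, gradient steps can never leave the space of plausible kernels, which is what reduces the sensitivity to initialization.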