🤖 AI Summary
Existing image deblurring methods are typically designed for single blur types, exhibiting limited generalization and poor performance on real-world mixed or heterogeneous blur degradations. To address this, we propose the first unified, general-purpose deblurring framework capable of jointly handling global motion, local motion, low-light, and defocus blur. Our core innovation is a Mixture-of-Experts (MoE) decoding module integrated with a deep feature-driven dynamic routing mechanism, enabling blur-type-aware adaptive feature allocation and end-to-end restoration. Extensive experiments demonstrate that our method matches or exceeds state-of-the-art specialized models on known blur categories, while achieving significantly superior generalization and robustness on unseen blur types. This work establishes a new paradigm for practical, single-model, multi-scenario deblurring.
📝 Abstract
Image deblurring, the removal of blur artifacts from images, is a fundamental task in computational photography and low-level computer vision. Existing approaches focus on specialized solutions tailored to particular blur types and therefore lack generalization; covering several blur types requires multiple models, which is impractical in many real-world scenarios. In this paper, we introduce the first all-in-one deblurring method capable of efficiently restoring images affected by diverse blur degradations, including global motion, local motion, blur under low-light conditions, and defocus blur. We propose a mixture-of-experts (MoE) decoding module that dynamically routes image features according to the recognized blur degradation, enabling precise and efficient restoration in an end-to-end manner. Our unified approach not only achieves performance comparable to dedicated task-specific models, but also demonstrates remarkable robustness and generalization on unseen blur degradations.
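The core routing idea can be sketched in a few lines: a router scores each feature vector against the expert set, and the decoder output is the routing-weighted blend of expert outputs. The sketch below is a minimal, hypothetical NumPy illustration only; the expert count, feature size, and toy linear "experts" are assumptions for demonstration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 4  # illustrative: e.g. global motion, local motion, low-light, defocus
FEAT_DIM = 8     # illustrative feature size

# Each toy "expert" is a linear decoder represented by a weight matrix.
experts = [rng.standard_normal((FEAT_DIM, FEAT_DIM)) for _ in range(NUM_EXPERTS)]

# The router maps a feature vector to one logit per expert.
router_w = rng.standard_normal((FEAT_DIM, NUM_EXPERTS))

def softmax(x):
    # Numerically stable softmax over the expert logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_decode(feat):
    """Blend expert outputs using routing weights derived from the feature itself."""
    gates = softmax(feat @ router_w)              # (NUM_EXPERTS,) routing weights, sum to 1
    outs = np.stack([feat @ w for w in experts])  # (NUM_EXPERTS, FEAT_DIM) expert outputs
    return gates @ outs                           # weighted sum over experts

feat = rng.standard_normal(FEAT_DIM)
restored = moe_decode(feat)
print(restored.shape)  # (8,)
```

In practice, soft routing of this kind lets a single model specialize per degradation while remaining end-to-end trainable, since the gate is differentiable with respect to the input features.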