🤖 AI Summary
Extreme blind image restoration (EBIR) suffers from domain shift, artifacts, and detail loss when confronted with severe composite degradations beyond the training distribution. To address this, we propose a two-stage decoupled restoration framework grounded in information bottleneck theory: first projecting extremely low-quality (ELQ) images onto an intermediate low-quality manifold, then reconstructing high-fidelity outputs via a frozen pre-trained blind image restoration (BIR) model. Our key innovation is a theoretically motivated information bottleneck loss that jointly enforces reconstruction fidelity and prior alignment, enabling plug-and-play model enhancement without fine-tuning and supporting single-pass, inference-time prompt optimization. Experiments demonstrate substantial improvements in restoration quality across diverse extreme degradation scenarios, with effective artifact suppression, faithful texture preservation, strong stability, and robust cross-task generalization.
📝 Abstract
Blind Image Restoration (BIR) methods have achieved remarkable success but falter when faced with Extreme Blind Image Restoration (EBIR), where inputs suffer from severe, compounded degradations beyond their training scope. Directly learning a mapping from extremely low-quality (ELQ) to high-quality (HQ) images is challenging due to the massive domain gap, often leading to unnatural artifacts and loss of detail. To address this, we propose a novel framework that decomposes the intractable ELQ-to-HQ restoration process. We first learn a projector that maps an ELQ image onto an intermediate, less-degraded LQ manifold. This intermediate image is then restored to HQ using a frozen, off-the-shelf BIR model. Our approach is grounded in information theory; we provide a novel perspective on image restoration as an Information Bottleneck problem and derive a theoretically driven objective to train our projector. This loss function effectively stabilizes training by balancing a low-quality reconstruction term with a high-quality prior-matching term. Our framework enables Look Forward Once (LFO) for inference-time prompt refinement, and supports plug-and-play strengthening of existing image restoration models without the need for fine-tuning. Extensive experiments under severe degradation regimes demonstrate the effectiveness of our approach.
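To make the two-term objective concrete, here is a minimal NumPy sketch of the idea described above: a learned projector maps ELQ inputs toward the LQ manifold, and the training loss balances a low-quality reconstruction term against a prior-matching term. All names (`projector`, `ib_loss`, `lam`) and the moment-matching proxy for the prior term are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def projector(elq, W):
    # Stage 1 (hypothetical): learned mapping from an ELQ image toward the
    # intermediate LQ manifold. A linear map stands in for the real network.
    return elq @ W

def ib_loss(elq, lq_target, W, lam=0.1):
    lq_pred = projector(elq, W)
    # Reconstruction term: stay close to the intermediate LQ manifold.
    recon = np.mean((lq_pred - lq_target) ** 2)
    # Prior-matching term: keep outputs inside the distribution the frozen
    # BIR model expects as input (crudely approximated here by matching the
    # first and second moments of the LQ target).
    prior = (lq_pred.mean() - lq_target.mean()) ** 2 \
          + (lq_pred.var() - lq_target.var()) ** 2
    return recon + lam * prior

elq = rng.normal(size=(8, 4))
loss_at_optimum = ib_loss(elq, elq, np.eye(4))      # perfect projection
loss_off_manifold = ib_loss(elq, elq, 0.5 * np.eye(4))  # shrunken projection
```

In the actual framework, Stage 2 simply feeds `projector(elq, W)` to the frozen BIR model, so only the projector's parameters are trained.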