🤖 AI Summary
Existing all-in-one image restoration methods suffer from high inference overhead and poor generalization across diverse degradations. This paper proposes the first diffusion-based, single-step unified restoration framework. Our method employs degradation-aware LoRA to enable multi-degradation feature conditioning and incorporates a high-fidelity detail enhancement module in the decoder. Built upon the pre-trained Stable Diffusion model, it replaces full-parameter fine-tuning with low-rank adaptation, substantially reducing computational cost. Extensive experiments demonstrate that our approach surpasses existing diffusion-based methods in PSNR and SSIM across multiple benchmarks, while accelerating inference by 3–5×. Moreover, it achieves superior texture reconstruction and structural consistency. To the best of our knowledge, this is the first method to simultaneously deliver high-quality restoration and high inference efficiency within a single-step unified framework.
📝 Abstract
Diffusion models have shown strong potential in all-in-one image restoration (AiOIR), as they excel at generating rich texture details. Existing AiOIR methods either retrain a diffusion model from scratch or fine-tune a pretrained diffusion model with extra conditional guidance. However, they often suffer from high inference costs and limited adaptability to diverse degradation types. In this paper, we propose an efficient AiOIR method, Diffusion Once and Done (DOD), which aims to achieve superior restoration performance with only one-step sampling of Stable Diffusion (SD) models. Specifically, multi-degradation feature modulation is first introduced to capture different degradation prompts with a pretrained diffusion model. Then, parameter-efficient conditional low-rank adaptation integrates these prompts to fine-tune the SD model for different degradation types. In addition, a high-fidelity detail enhancement module is integrated into the SD decoder to improve structural and textural details. Experiments demonstrate that our method outperforms existing diffusion-based restoration approaches in both visual quality and inference efficiency.
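To make the conditional low-rank adaptation idea concrete, here is a minimal sketch (not the authors' code) of a LoRA-adapted linear layer whose low-rank branch is gated by a degradation-prompt embedding. The rank `r`, scaling `alpha`, and the prompt-to-gate projection `p` are illustrative assumptions; in practice `A`, `B`, and the gating would be trained while the pretrained weight `W` stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, prompt_dim = 64, 64, 4, 16
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable LoRA down-projection
B = np.zeros((d_out, r))                   # trainable LoRA up-projection (zero-init)
p = rng.standard_normal(prompt_dim)        # hypothetical prompt-to-gate projection
alpha = 8.0                                # LoRA scaling factor (assumed)

def conditional_lora_forward(x, prompt):
    """Frozen layer output plus a degradation-prompt-gated low-rank update."""
    gate = 1.0 / (1.0 + np.exp(-prompt @ p))  # scalar gate from the degradation prompt
    return W @ x + gate * (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
prompt = rng.standard_normal(prompt_dim)   # e.g., an embedding for a "rain" degradation
y = conditional_lora_forward(x, prompt)

# With B zero-initialized, the adapted layer initially matches the frozen one,
# so fine-tuning starts from the pretrained SD behavior.
assert np.allclose(y, W @ x)
```

The zero-initialized up-projection `B` is the standard LoRA trick: the adapter contributes nothing at the start of fine-tuning, and only the small matrices `A` and `B` (plus the gating) receive gradients, which is what keeps the adaptation parameter-efficient compared with full fine-tuning.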