🤖 AI Summary
Real-world images are often degraded by spatially heterogeneous factors such as haze, rain, snow, and low illumination. Existing “one-for-all” image restoration methods rely on external textual prompts or handcrafted priors (e.g., frequency-based heuristics), limiting their generalizability. This paper proposes the first implicit prior learning framework for unified image restoration, modeling restoration as a degradation-aware inference process in latent space—enabling adaptive feature selection, spatial localization, and content reconstruction. Our approach jointly designs an implicit prior encoder and a lightweight decoder, augmented with an adaptive feature routing mechanism, eliminating dependence on explicit prompts or manual priors. Extensive experiments demonstrate state-of-the-art performance across six single-degradation, five composite-degradation, and unseen degradation scenarios, achieving an average PSNR gain of 1.68 dB and a threefold speedup in inference time.
📝 Abstract
Real-world images often suffer from spatially diverse degradations such as haze, rain, snow, and low light, significantly impacting visual quality and downstream vision tasks. Existing all-in-one restoration (AIR) approaches either depend on external text prompts or embed hand-crafted architectural priors (e.g., frequency heuristics); both impose discrete, brittle assumptions that weaken generalization to unseen or mixed degradations. To address this limitation, we propose to reframe AIR as learned latent prior inference, where degradation-aware representations are automatically inferred from the input without explicit task cues. Based on these latent priors, we formulate AIR as a structured reasoning paradigm: (1) which features to route (adaptive feature selection), (2) where to restore (spatial localization), and (3) what to restore (degradation semantics). We design a lightweight decoding module that efficiently leverages these latent encoded cues for spatially adaptive restoration. Extensive experiments across six common degradation tasks, five compound settings, and previously unseen degradations demonstrate that our method outperforms state-of-the-art (SOTA) approaches, achieving an average PSNR improvement of 1.68 dB while running three times faster at inference.
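The three-part paradigm described above (which features, where, and what to restore) can be illustrated with a minimal NumPy sketch. This is not the authors' architecture; every name and projection here is a hypothetical stand-in, using global pooling as a toy "implicit prior encoder", a softmax gate for feature routing, and a sigmoid map for spatial localization.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy encoder feature map standing in for features of a degraded image.
H, W, C, K = 8, 8, 4, 3                # K = number of candidate restoration branches
feats = rng.normal(size=(H, W, C))

# "What to restore": a latent degradation prior z inferred from the input
# itself (global average pooling + a random projection as a stand-in).
W_prior = rng.normal(size=(C, K))
z = feats.mean(axis=(0, 1))            # (C,) global degradation descriptor
route = softmax(z @ W_prior)           # (K,) soft routing weights over branches

# "Where to restore": a per-pixel localization map.
w_loc = rng.normal(size=(C,))
where = sigmoid(feats @ w_loc)         # (H, W) degradation-likelihood map

# "Which features to route": blend per-branch outputs by the routing weights,
# then apply them only where degradation is localized.
branch_outs = np.stack([feats * (k + 1) for k in range(K)], axis=-1)  # (H, W, C, K)
mixed = (branch_outs * route).sum(axis=-1)                            # (H, W, C)
restored = where[..., None] * mixed + (1 - where[..., None]) * feats
```

No text prompt or task label enters the computation: the routing and localization signals are both derived from the input features, which is the core idea of prompt-free, degradation-aware restoration sketched here.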