Degradation-Aware All-in-One Image Restoration via Latent Prior Encoding

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Real-world images are often degraded by spatially heterogeneous factors such as haze, rain, snow, and low illumination. Existing “one-for-all” image restoration methods rely on external textual prompts or handcrafted priors (e.g., frequency-based heuristics), limiting their generalizability. This paper proposes the first implicit prior learning framework for unified image restoration, modeling restoration as a degradation-aware inference process in latent space—enabling adaptive feature selection, spatial localization, and content reconstruction. Our approach jointly designs an implicit prior encoder and a lightweight decoder, augmented with an adaptive feature routing mechanism, eliminating dependence on explicit prompts or manual priors. Extensive experiments demonstrate state-of-the-art performance across six single-degradation, five composite-degradation, and unseen degradation scenarios, achieving an average PSNR gain of 1.68 dB and a threefold speedup in inference time.

Technology Category

Application Category

📝 Abstract
Real-world images often suffer from spatially diverse degradations such as haze, rain, snow, and low-light, significantly impacting visual quality and downstream vision tasks. Existing all-in-one restoration (AIR) approaches either depend on external text prompts or embed hand-crafted architectural priors (e.g., frequency heuristics); both impose discrete, brittle assumptions that weaken generalization to unseen or mixed degradations. To address this limitation, we propose to reframe AIR as learned latent prior inference, where degradation-aware representations are automatically inferred from the input without explicit task cues. Based on latent priors, we formulate AIR as a structured reasoning paradigm: (1) which features to route (adaptive feature selection), (2) where to restore (spatial localization), and (3) what to restore (degradation semantics). We design a lightweight decoding module that efficiently leverages these latent encoded cues for spatially-adaptive restoration. Extensive experiments across six common degradation tasks, five compound settings, and previously unseen degradations demonstrate that our method outperforms state-of-the-art (SOTA) approaches, achieving an average PSNR improvement of 1.68 dB while being three times more efficient.
Problem

Research questions and friction points this paper is trying to address.

Addressing spatially diverse image degradations like haze and low-light
Overcoming limitations of text prompts and hand-crafted architectural priors
Enabling adaptive restoration without explicit task cues via latent priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned latent prior inference for degradation-aware representations
Structured reasoning paradigm with adaptive feature routing
Lightweight decoding module using spatially-adaptive restoration cues
🔎 Similar Papers
No similar papers found.