RAM++: Robust Representation Learning via Adaptive Mask for All-in-One Image Restoration

πŸ“… 2025-09-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing degradation-oriented methods suffer from significant limitations: poor robustness under extreme degradations (e.g., when degradations are strongly coupled with image structures), imbalanced cross-task performance, overfitting to seen degradations, and weak generalization to unseen ones. To address these issues, we propose RAM++, a two-stage unified image restoration framework that achieves content-adaptive and robust recovery by jointly leveraging high-level semantic understanding and low-level texture generation. Our key contributions are: (1) adaptive semantic-aware mask pretraining; (2) mask-attribute-conductance selective fine-tuning; and (3) DINOv2-based robust feature regularization with efficient semantic-texture feature fusion. Extensive experiments demonstrate that RAM++ delivers balanced performance across diverse degradation types and substantially improves generalization and restoration quality under extreme, unseen, and mixed degradations, establishing new state-of-the-art results.

πŸ“ Abstract
This work presents Robust Representation Learning via Adaptive Mask (RAM++), a two-stage framework for all-in-one image restoration. RAM++ integrates high-level semantic understanding with low-level texture generation to achieve content-oriented robust restoration. It addresses the limitations of existing degradation-oriented methods in extreme scenarios (e.g., degradations strongly coupled with image structures). RAM++ also mitigates common challenges such as imbalanced performance across tasks, overfitting to seen degradations, and weak generalization to unseen ones through three key designs: 1) Adaptive Semantic-Aware Mask (AdaSAM): a pretraining strategy that applies pixel-level masks to semantically rich and textured regions. This design enables the network to learn both generative priors and image content priors from various degradations. 2) Mask Attribute Conductance (MAC): a selective fine-tuning strategy that updates only the layers contributing most to bridging the integrity gap between masked pretraining and full-image fine-tuning, while retaining the learned priors. 3) Robust Feature Regularization (RFR): a strategy that leverages DINOv2's semantically consistent and degradation-invariant representations, together with efficient feature fusion, to achieve faithful and semantically coherent restoration. With these designs, RAM++ achieves robust, well-balanced, and state-of-the-art performance across seen, unseen, extreme, and mixed degradations. Our code and model will be released at https://github.com/DragonisCV/RAM
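To make the AdaSAM idea concrete, a pixel-level mask biased toward textured regions can be sketched as follows. This is an illustrative assumption, not the paper's actual criterion: local variance stands in for the semantic/texture score, and masked patches are sampled with probability proportional to that score. The function name `adaptive_texture_mask` and the patch-based sampling are hypothetical.

```python
import numpy as np

def adaptive_texture_mask(image, mask_ratio=0.5, patch=8, seed=0):
    """Illustrative semantic-aware masking sketch.

    Local variance is used as a crude proxy for semantically rich /
    textured regions (an assumption; the paper's AdaSAM score may differ).
    Patches are masked with probability proportional to their score.
    """
    h, w = image.shape
    rows, cols = h // patch, w // patch
    # Texture score per patch: local variance.
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = block.var()
    # Sample patches to mask, biased toward high-texture regions.
    probs = scores.flatten() + 1e-8
    probs /= probs.sum()
    n_mask = int(mask_ratio * probs.size)
    rng = np.random.default_rng(seed)
    chosen = rng.choice(probs.size, size=n_mask, replace=False, p=probs)
    mask = np.zeros((h, w), dtype=bool)
    for idx in chosen:
        i, j = divmod(int(idx), cols)
        mask[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = True
    return mask
```

During pretraining, the network would then reconstruct the masked pixels from the visible context, forcing it to learn content priors rather than degradation-specific shortcuts.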
Problem

Research questions and friction points this paper is trying to address.

Robust restoration under extreme degradation scenarios
Balancing performance across diverse restoration tasks
Generalizing effectively to unseen degradation types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Semantic-Aware Mask pretraining strategy
Mask Attribute Conductance selective fine-tuning
Robust Feature Regularization with DINOv2
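The RFR design can be sketched as a feature-space regularizer: restored-image features are pulled toward the degradation-invariant representations of a frozen encoder such as DINOv2. The exact loss below (one minus cosine similarity, averaged over tokens) is an assumption for illustration; the paper's formulation may differ.

```python
import numpy as np

def feature_regularization_loss(feat_restored, feat_clean):
    """Hedged sketch of robust feature regularization.

    feat_restored, feat_clean: (num_tokens, dim) arrays, e.g. token
    features of the restored and clean images from a frozen encoder
    (DINOv2 in the paper; any degradation-invariant encoder here).
    Returns the mean per-token cosine distance.
    """
    a = feat_restored / (np.linalg.norm(feat_restored, axis=-1, keepdims=True) + 1e-8)
    b = feat_clean / (np.linalg.norm(feat_clean, axis=-1, keepdims=True) + 1e-8)
    cos = (a * b).sum(axis=-1)  # per-token cosine similarity
    return float((1.0 - cos).mean())
```

Identical features give a loss near zero, while orthogonal features give a loss near one, so minimizing this term encourages semantically coherent restoration.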
πŸ”Ž Similar Papers
No similar papers found.