Decouple to Reconstruct: High Quality UHD Restoration via Active Feature Disentanglement and Reversible Fusion

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost, inherent information loss in variational autoencoders (VAEs), and incomplete degradation removal caused by entanglement between degradation components and background in ultra-high-definition (UHD) image restoration, this paper proposes a Controlled Differential Disentangled VAE framework. The method introduces three key innovations: (1) hierarchical contrastive disentanglement learning to explicitly separate degradation features from background representations in the latent space; (2) an orthogonal gated projection module that enforces orthogonality among latent variables to strengthen disentanglement; and (3) a complex-valued invertible multi-scale fusion network that ensures cross-scale background consistency and high-fidelity reconstruction. Evaluated on six UHD image restoration tasks, the approach achieves state-of-the-art performance with only 1M parameters, significantly improving detail preservation and degradation removal while alleviating the VAE information bottleneck.
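The paper itself does not publish reference code here, but the second innovation, a projection module that splits a latent code into gated background and degradation branches while penalizing overlap between them, can be illustrated with a minimal, hypothetical sketch. The module name, layer choices, and the soft cosine-similarity orthogonality penalty below are all assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrthogonalGatedProjection(nn.Module):
    """Hypothetical sketch of an orthogonal gated projection:
    split a latent vector into a gated background branch and a
    complementary degradation branch, and penalize their overlap."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_bg = nn.Linear(dim, dim)   # background subspace projection
        self.proj_deg = nn.Linear(dim, dim)  # degradation subspace projection
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, z: torch.Tensor):
        g = self.gate(z)                       # per-dimension gate in (0, 1)
        z_bg = self.proj_bg(z) * g             # gated background features
        z_deg = self.proj_deg(z) * (1.0 - g)   # complementary degradation features
        # Soft orthogonality penalty: squared cosine similarity between branches.
        cos = F.cosine_similarity(z_bg, z_deg, dim=-1)
        ortho_loss = cos.pow(2).mean()
        return z_bg, z_deg, ortho_loss
```

Adding `ortho_loss` to the training objective pushes the two branches toward orthogonal directions, which is one common way to make a disentangled split more robust; the paper's actual mechanism may differ.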

📝 Abstract
Ultra-high-definition (UHD) image restoration often faces computational bottlenecks and information loss due to its extremely high resolution. Existing studies based on Variational Autoencoders (VAE) improve efficiency by transferring the image restoration process from pixel space to latent space. However, because degraded components are inherently coupled with background elements in degraded images, both information loss during compression and information gain during compensation remain uncontrollable, so restored images often exhibit detail loss and incomplete degradation removal. To address this issue, we propose a Controlled Differential Disentangled VAE, which uses Hierarchical Contrastive Disentanglement Learning and an Orthogonal Gated Projection Module to guide the VAE to actively discard easily recoverable background information while encoding the harder-to-recover degraded information into the latent space. Additionally, we design a Complex Invertible Multiscale Fusion Network to handle background features and ensure their consistency, and use a latent space restoration network to transform the degraded latent features, leading to more accurate restoration results. Extensive experimental results demonstrate that our method effectively alleviates the information loss problem in VAE models while ensuring computational efficiency, significantly improves the quality of UHD image restoration, and achieves state-of-the-art results on six UHD restoration tasks with only 1M parameters.
Problem

Research questions and friction points this paper is trying to address.

Addresses computational bottlenecks in UHD image restoration
Reduces information loss and incomplete degradation removal
Improves quality of UHD image restoration with fewer parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled Differential Disentangled VAE for UHD restoration
Hierarchical Contrastive Disentanglement Learning for feature separation
Complex Invertible Multiscale Fusion Network for feature consistency
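The first innovation in the list above, contrastive disentanglement learning that pulls the background representations of a degraded image and its clean counterpart together while treating other images in the batch as negatives, can be sketched with a standard InfoNCE-style loss. The function name, the temperature value, and the pairing scheme are illustrative assumptions; the paper's hierarchical variant is more involved:

```python
import torch
import torch.nn.functional as F

def contrastive_disentanglement_loss(bg_degraded: torch.Tensor,
                                     bg_clean: torch.Tensor,
                                     temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical InfoNCE-style sketch: background features of a degraded
    image and its clean counterpart (same batch index) are positives; all
    other images in the batch serve as negatives."""
    a = F.normalize(bg_degraded, dim=-1)           # (B, D) unit vectors
    b = F.normalize(bg_clean, dim=-1)              # (B, D) unit vectors
    logits = a @ b.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(a.size(0))              # positives on the diagonal
    return F.cross_entropy(logits, targets)
```

Minimizing this loss encourages the encoder to produce background features that are invariant to degradation, which is the separation the Problem section above asks for.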
Yidi Liu
University of Science and Technology of China
Dong Li
University of Science and Technology of China
Yuxin Ma
University of Science and Technology of China
Jie Huang
University of Science and Technology of China
Wenlong Zhang
Shanghai AI Laboratory
Xueyang Fu
University of Science and Technology of China
Zheng-jun Zha
University of Science and Technology of China