Towards a Universal Image Degradation Model via Content-Degradation Disentanglement

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image degradation synthesis models generalize poorly: they rely on hand-crafted, user-provided parameters and support only a narrow set of predefined degradation types, so they fail to capture the complex, realistic mix of homogeneous (global) and inhomogeneous (spatially varying) degradations observed in practice. To address this, the paper proposes the first universal degradation model that requires no user-provided degradation parameters. The method disentangles image content from degradation features without supervision, using a disentangle-by-compression mechanism, and introduces two novel modules that explicitly extract and incorporate spatially non-uniform degradation components. Built on an end-to-end autoencoder architecture, the model synthesizes diverse, high-fidelity degradations without manual intervention. Evaluated on film-grain simulation and blind image restoration, the approach improves degradation realism and the generalization of downstream tasks.

📝 Abstract
Image degradation synthesis is highly desirable in a wide variety of applications ranging from image restoration to simulating artistic effects. Existing models are designed to generate one specific or a narrow set of degradations, which often require user-provided degradation parameters. As a result, they lack the generalizability to synthesize degradations beyond their initial design or adapt to other applications. Here we propose the first universal degradation model that can synthesize a broad spectrum of complex and realistic degradations containing both homogeneous (global) and inhomogeneous (spatially varying) components. Our model automatically extracts and disentangles homogeneous and inhomogeneous degradation features, which are later used for degradation synthesis without user intervention. A disentangle-by-compression method is proposed to separate degradation information from images. Two novel modules for extracting and incorporating inhomogeneous degradations are created to model inhomogeneous components in complex degradations. We demonstrate the model's accuracy and adaptability in film-grain simulation and blind image restoration tasks. The demo video, code, and dataset of this project will be released upon publication at github.com/yangwenbo99/content-degradation-disentanglement.
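The abstract's core idea is that compressing the degradation signal through a narrow bottleneck forces the resulting code to carry degradation statistics rather than image content, so the same code can re-synthesize that degradation onto new content. Below is a minimal NumPy toy sketch of that intuition only. It is not the paper's method: the actual model is an unsupervised, learned end-to-end autoencoder with modules for inhomogeneous components, whereas this sketch assumes paired clean/degraded images, models only a homogeneous additive-noise degradation, and uses hypothetical function names.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_degradation(degraded, clean):
    """Hypothetical stand-in for a degradation encoder.

    The residual between the degraded and clean image is compressed
    into a two-number code; the tiny code size is the 'compression'
    that keeps image content out of the degradation representation."""
    residual = degraded - clean
    # Homogeneous (global) statistics only; the paper additionally
    # models inhomogeneous, spatially varying components.
    return np.array([residual.mean(), residual.std()])

def apply_degradation(clean, code, rng):
    """Synthesize a degradation with the encoded statistics onto any
    clean image (here: simple additive Gaussian noise)."""
    mean, std = code
    return clean + rng.normal(mean, std, size=clean.shape)

# A clean image corrupted by unknown additive noise.
clean = rng.random((64, 64))
degraded = clean + rng.normal(0.10, 0.05, size=clean.shape)

# Extract the degradation code, then transfer it to new content.
code = encode_degradation(degraded, clean)
new_content = rng.random((64, 64))
resynth = apply_degradation(new_content, code, rng)

print(code.round(3))  # roughly [0.1, 0.05]
```

The transferred degradation matches the statistics of the original corruption even though `new_content` is a different image, which is the disentanglement property the paper builds on at much greater generality.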
Problem

Research questions and friction points this paper is trying to address.

Existing models synthesize only one specific or a narrow set of degradations and cannot generalize beyond their initial design
They require user-provided, hand-crafted degradation parameters
They fail to capture realistic mixes of homogeneous (global) and inhomogeneous (spatially varying) degradation components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal degradation model synthesizes diverse degradations
Disentangles homogeneous and inhomogeneous degradation features automatically
Introduces disentangle-by-compression and novel modules for inhomogeneous degradations