DiffDecompose: Layer-Wise Decomposition of Alpha-Composited Images via Diffusion Transformers

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image decomposition methods struggle with nonlinear alpha-compositing occlusions involving translucent or transparent layers, primarily due to reliance on mask priors, assumptions of static objects, and the absence of dedicated benchmarks. This paper introduces layered decomposition of alpha-composited images, a novel task that aims to recover physically interpretable foreground and background layers from a single composite input without per-layer supervision. The key contributions are: (1) AlphaBlend, the first large-scale dataset for transparent-layer decomposition; (2) an in-context decomposition paradigm that eliminates per-layer supervision; and (3) a Layer Position Encoding Cloning mechanism that ensures cross-layer pixel consistency. Integrating diffusion models, Transformer architectures, semantic prompt guidance, and explicit alpha modeling, the method achieves significant improvements over state-of-the-art approaches on both the AlphaBlend and LOGO datasets, enabling multi-layer collaborative reconstruction across six real-world translucent scenarios, including glassware and biological cells.

📝 Abstract
Diffusion models have recently achieved great success in many generation tasks such as object removal. Nevertheless, existing image decomposition methods struggle to disentangle semi-transparent or transparent layer occlusions due to mask prior dependencies, static object assumptions, and the lack of dedicated datasets. In this paper, we delve into a novel task: Layer-Wise Decomposition of Alpha-Composited Images, which aims to recover constituent layers from a single overlapped image under semi-transparent/transparent alpha-layer nonlinear occlusion. To address challenges in layer ambiguity, generalization, and data scarcity, we first introduce AlphaBlend, the first large-scale, high-quality dataset for transparent and semi-transparent layer decomposition, supporting six real-world subtasks (e.g., translucent flare removal, semi-transparent cell decomposition, glassware decomposition). Building on this dataset, we present DiffDecompose, a diffusion Transformer-based framework that learns the posterior over possible layer decompositions conditioned on the input image, semantic prompts, and blending type. Rather than regressing alpha mattes directly, DiffDecompose performs In-Context Decomposition, enabling the model to predict one or multiple layers without per-layer supervision, and introduces Layer Position Encoding Cloning to maintain pixel-level correspondence across layers. Extensive experiments on the proposed AlphaBlend dataset and the public LOGO dataset verify the effectiveness of DiffDecompose. The code and dataset will be released upon paper acceptance at: https://github.com/Wangzt1121/DiffDecompose.
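To make the task concrete, the forward operation that DiffDecompose seeks to invert is standard alpha compositing, C = αF + (1 − α)B. The sketch below is an illustrative NumPy helper (the name `alpha_composite` and the toy arrays are assumptions, not the paper's code); it shows why recovering the layers from the composite alone is ill-posed.

```python
import numpy as np

def alpha_composite(foreground, background, alpha):
    """Blend a foreground layer over a background using per-pixel alpha.

    foreground, background: float arrays in [0, 1], shape (H, W, 3)
    alpha: float array in [0, 1], shape (H, W, 1), per-pixel opacity
    """
    return alpha * foreground + (1.0 - alpha) * background

# A 2x2 toy composite: a half-transparent white layer over black.
fg = np.ones((2, 2, 3))        # white foreground layer
bg = np.zeros((2, 2, 3))       # black background layer
a = np.full((2, 2, 1), 0.5)    # uniform 50% opacity
composite = alpha_composite(fg, bg, a)
# Every channel of `composite` is 0.5; infinitely many (fg, bg, a)
# triples reproduce it, which is the layer ambiguity the paper targets.
```

Because many layer/alpha combinations map to the same composite, DiffDecompose models a posterior over decompositions rather than regressing a single deterministic alpha matte.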
Problem

Research questions and friction points this paper is trying to address.

Decompose alpha-composited images into layers with transparency
Address layer ambiguity and data scarcity in decomposition tasks
Enable multi-layer prediction without per-layer supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion Transformers for layer decomposition
Introduces AlphaBlend dataset for transparent layers
Employs Layer Position Encoding Cloning technique