Multiple Noises in Diffusion Model for Semi-Supervised Multi-Domain Translation

📅 2023-09-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inefficiency of retraining separate models for arbitrary cross-domain image translation—bidirectional, one-to-many, and many-to-one—in semi-supervised multi-domain settings. We propose a unified conditional diffusion framework that assigns each domain an independent noise schedule; missing domains are naturally modeled as high-noise states, enabling semantic-guided progressive reconstruction across domains. Key technical contributions include multi-level domain-specific noise scheduling, a semi-supervised loss, and a cross-domain noise masking mechanism. Evaluated on multi-domain synthetic benchmarks, our method significantly improves generalization under low-label regimes and enhances configuration flexibility. Notably, it enables complex semantic domain inversion—e.g., reversing translation directions—without retraining, a capability unattainable with conventional multi-model paradigms. This breakthrough overcomes longstanding efficiency and scalability bottlenecks in multi-domain image translation.
📝 Abstract
Domain-to-domain translation involves generating a target domain sample given a condition in the source domain. Most existing methods focus on fixed input and output domains, i.e. they only work for specific configurations (i.e. for two domains, either $D_1 ightarrow{}D_2$ or $D_2 ightarrow{}D_1$). This paper proposes Multi-Domain Diffusion (MDD), a conditional diffusion framework for multi-domain translation in a semi-supervised context. Unlike previous methods, MDD does not require defining input and output domains, allowing translation between any partition of domains within a set (such as $(D_1, D_2) ightarrow{}D_3$, $D_2 ightarrow{}(D_1, D_3)$, $D_3 ightarrow{}D_1$, etc. for 3 domains), without the need to train separate models for each domain configuration. The key idea behind MDD is to leverage the noise formulation of diffusion models by incorporating one noise level per domain, which allows missing domains to be modeled with noise in a natural way. This transforms the training task from a simple reconstruction task to a domain translation task, where the model relies on less noisy domains to reconstruct more noisy domains. We present results on a multi-domain (with more than two domains) synthetic image translation dataset with challenging semantic domain inversion.
Problem

Research questions and friction points this paper is trying to address.

Handling arbitrary multi-domain translation configurations
Reconstructing missing views for new data objects
Enabling semi-supervised learning with arbitrary supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiple noises per domain diffusion
Semi-supervised learning via noise representation
Single model handles arbitrary domain configurations
🔎 Similar Papers
No similar papers found.
T
Tsiry Mayet
LITIS UR 4108, F-76000, INSA Rouen Normandie, Rouen, France
Simon Bernard
Simon Bernard
Associate Professor (MCF), LITIS lab, Université Rouen Normandie, France
Machine LearningPattern RecognitionEnsemble LearningRandom Forests
C
Clément Chatelain
LITIS UR 4108, F-76000, INSA Rouen Normandie, Rouen, France
R
R. Herault
LITIS UR 4108, F-76000, INSA Rouen Normandie, Rouen, France