DoraCycle: Domain-Oriented Adaptation of Unified Generative Model in Multimodal Cycles

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of adapting generative models to complex domains where paired image-text data are scarce, this paper proposes an unsupervised domain adaptation framework based on multimodal cycle consistency. The method exploits the bidirectional text↔image mappings learned by a unified generative model and enforces cross-modal consistency through two cycles: text→image→text and image→text→image. Because both endpoints of each cycle share the same modality, the model can be fine-tuned end-to-end with a cross-entropy loss computed at the cycle endpoints. Without any paired annotations, the framework adapts effectively for tasks such as stylization. For tasks requiring new paired knowledge, such as identity customization, it attains state-of-the-art performance using only a small number of paired samples (e.g., 10–20) alongside abundant unpaired data. To the authors' knowledge, this is the first work to extend cycle consistency to multimodal generative domain adaptation, enabling diverse adaptation objectives while preserving a single unified model and a simple architecture.

📝 Abstract
Adapting generative models to specific domains presents an effective solution for satisfying specialized requirements. However, adapting to some complex domains remains challenging, especially when these domains require substantial paired data to capture the targeted distributions. Since unpaired data from a single modality, such as vision or language, is more readily available, we utilize the bidirectional mappings between vision and language learned by the unified generative model to enable training on unpaired data for domain adaptation. Specifically, we propose DoraCycle, which integrates two multimodal cycles: text-to-image-to-text and image-to-text-to-image. The model is optimized through cross-entropy loss computed at the cycle endpoints, where both endpoints share the same modality. This facilitates self-evolution of the model without reliance on annotated text-image pairs. Experimental results demonstrate that for tasks independent of paired knowledge, such as stylization, DoraCycle can effectively adapt the unified model using only unpaired data. For tasks involving new paired knowledge, such as specific identities, a combination of a small set of paired image-text examples and larger-scale unpaired data is sufficient for effective domain-oriented adaptation. The code will be released at https://github.com/showlab/DoraCycle.
Problem

Research questions and friction points this paper is trying to address.

Adapting generative models to complex domains with limited paired data.
Utilizing unpaired data for domain adaptation via multimodal cycles.
Enabling effective domain-oriented adaptation with minimal paired examples.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional mappings between vision and language
Multimodal cycles for unpaired data training
Cross-entropy loss optimization at cycle endpoints
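The cycle idea above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the paper's architecture: a small logit table plays the role of the unified model's text↔image mappings (with its transpose reused for the reverse direction), tokens stand in for text and images, and a manual softmax gradient replaces the real fine-tuning. It shows the core mechanism only: run text→image→text on unpaired text and apply cross-entropy at the text endpoint.

```python
import math

# Toy "unified model": unnormalized scores over 4 text and 4 image tokens.
# model[t][i] scores image token i given text token t; the transpose of the
# same table is reused for the image -> text direction (bidirectional mapping).
model = [[0.0] * 4 for _ in range(4)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target):
    # Negative log-likelihood of the target index under softmax(logits).
    return -math.log(softmax(logits)[target])

def text_cycle_loss(t):
    # text -> image (greedy decode) -> text; cross-entropy at the text endpoint,
    # so both endpoints of the cycle live in the same (text) modality.
    img = max(range(4), key=lambda i: model[t][i])
    back_logits = [model[tt][img] for tt in range(4)]  # image -> text scores
    return cross_entropy(back_logits, t), img

# Self-evolve on a single *unpaired* text token (index 2): no image-text pair
# is ever observed, only the cycle's reconstruction of its own input.
lr = 0.5
for step in range(50):
    loss, img = text_cycle_loss(2)
    probs = softmax([model[tt][img] for tt in range(4)])
    for tt in range(4):
        # Gradient of cross-entropy w.r.t. the image->text logits: softmax - onehot.
        grad = probs[tt] - (1.0 if tt == 2 else 0.0)
        model[tt][img] -= lr * grad

final_loss, _ = text_cycle_loss(2)
assert final_loss < 0.1  # the cycle now reconstructs its starting token
```

The image→text→image cycle is symmetric (swap the roles of the two tables); DoraCycle trains both cycles jointly on unpaired data from each modality, optionally mixing in a few paired examples when the target domain introduces new paired knowledge.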
Rui Zhao
Show Lab, National University of Singapore

Weijia Mao
PhD student at National University of Singapore; computer vision, 3D generation and reconstruction

Mike Zheng Shou
Show Lab, National University of Singapore