Language-Grounded Multi-Domain Image Translation via Semantic Difference Guidance

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods struggle to accurately map the semantic differences conveyed by natural language prompts into corresponding visual changes in multi-domain image translation while preserving irrelevant content. To address this, the paper proposes a semantic difference-guided mechanism that explicitly decomposes the semantic discrepancy between source and target prompts into attribute-level translation vectors, enabling fine-grained and composable cross-domain control. The authors design a GLIP-Adapter that integrates global semantics with local structural features, and introduce a multi-domain control guidance mechanism supporting independent intensity modulation for each attribute along with cross-modal alignment. Experiments on CelebA(Dialog) and BDD100K demonstrate that the approach outperforms existing baselines in visual fidelity, structural consistency, and interpretability of domain-specific control.

📝 Abstract
Multi-domain image-to-image translation requires grounding semantic differences expressed in natural language prompts into corresponding visual transformations, while preserving unrelated structural and semantic content. Existing methods struggle to maintain structural integrity and provide fine-grained, attribute-specific control, especially when multiple domains are involved. We propose LACE (Language-grounded Attribute Controllable Translation), built on two components: (1) a GLIP-Adapter that fuses global semantics with local structural features to preserve consistency, and (2) a Multi-Domain Control Guidance mechanism that explicitly grounds the semantic delta between source and target prompts into per-attribute translation vectors, aligning linguistic semantics with domain-level visual changes. Together, these modules enable compositional multi-domain control with independent strength modulation for each attribute. Experiments on CelebA(Dialog) and BDD100K demonstrate that LACE achieves high visual fidelity, structural preservation, and interpretable domain-specific control, surpassing prior baselines. This positions LACE as a cross-modal content generation framework bridging language semantics and controllable visual translation.
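The core idea of grounding a "semantic delta" into per-attribute translation vectors can be sketched as follows. This is a minimal, hypothetical illustration only: `embed_text` is a toy stand-in for a real text encoder (e.g. CLIP), and the function names and the vector-arithmetic composition are assumptions for illustration, not the paper's actual GLIP-Adapter or guidance implementation.

```python
import hashlib
import numpy as np

def embed_text(prompt: str, dim: int = 8) -> np.ndarray:
    """Toy deterministic text embedding (stand-in for a real encoder).

    Hashes the prompt into a seed and draws a unit vector, so the same
    prompt always maps to the same embedding.
    """
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def translation_vector(src_prompt: str, tgt_prompt: str) -> np.ndarray:
    """Semantic delta: target embedding minus source embedding."""
    return embed_text(tgt_prompt) - embed_text(src_prompt)

def compose_guidance(edits: list[tuple[str, str, float]]) -> np.ndarray:
    """Sum per-attribute deltas, each scaled by its own strength.

    Each edit is (source prompt, target prompt, strength); independent
    strengths mimic the per-attribute intensity modulation described
    in the abstract.
    """
    return sum(s * translation_vector(src, tgt) for src, tgt, s in edits)

# Two composed attribute edits with independent strengths.
guidance = compose_guidance([
    ("a face with no smile", "a smiling face", 0.8),
    ("a young face", "an old face", 0.3),
])
```

In the real system this guidance signal would steer a generative backbone rather than live in an 8-dimensional toy space, but the decomposition (one delta per attribute, one strength per delta) is the composable-control structure the abstract describes.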
Problem

Research questions and friction points this paper is trying to address.

multi-domain image translation
semantic difference
language grounding
structural preservation
attribute-specific control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-Grounded Translation
Semantic Difference Guidance
Multi-Domain Image Translation
Attribute-Controllable Generation
Cross-Modal Alignment
Jongwon Ryu
Department of Artificial Intelligence, Chung-Ang University
Joonhyung Park
Hyundai Mobis
Jaeho Han
Department of Artificial Intelligence, Chung-Ang University
Yeong-Seok Kim
Hyundai Mobis
Hye-rin Kim
Hyundai Mobis
Sunjae Yoon
KAIST
Deep Learning · Computer Vision · Generative AI
Junyeong Kim
Assistant Professor, Department of AI, Chung-Ang University
AI · CV · NLP · Multimodal Reasoning