🤖 AI Summary
Existing methods struggle to accurately map the semantic differences conveyed by natural language prompts into corresponding visual changes in multi-domain image translation while preserving irrelevant content. To address this, the authors propose a semantic difference-guided mechanism that explicitly decomposes the semantic discrepancy between source and target prompts into attribute-level translation vectors, enabling fine-grained and composable cross-domain control. They design a GLIP-Adapter to integrate global semantics with local structural features and introduce a multi-domain control guidance mechanism that supports independent intensity modulation for each attribute together with cross-modal alignment. Experiments on CelebA(Dialog) and BDD100K demonstrate that the approach outperforms existing baselines in visual fidelity, structural consistency, and interpretability of domain-specific control.
📝 Abstract
Multi-domain image-to-image translation requires grounding semantic differences expressed in natural language prompts into corresponding visual transformations, while preserving unrelated structural and semantic content. Existing methods struggle to maintain structural integrity and provide fine-grained, attribute-specific control, especially when multiple domains are involved. We propose LACE (Language-grounded Attribute Controllable Translation), built on two components: (1) a GLIP-Adapter that fuses global semantics with local structural features to preserve consistency, and (2) a Multi-Domain Control Guidance mechanism that explicitly grounds the semantic delta between source and target prompts into per-attribute translation vectors, aligning linguistic semantics with domain-level visual changes. Together, these modules enable compositional multi-domain control with independent strength modulation for each attribute. Experiments on CelebA(Dialog) and BDD100K demonstrate that LACE achieves high visual fidelity, structural preservation, and interpretable domain-specific control, surpassing prior baselines. This positions LACE as a cross-modal content generation framework bridging language semantics and controllable visual translation.
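The core idea of the semantic-delta mechanism can be sketched abstractly. This is a minimal illustration, not the paper's implementation: it assumes source and target prompts have already been mapped to embeddings by some text encoder (e.g. a CLIP-style model, here left as placeholders), and that attribute directions form an orthonormal basis onto which the delta is projected before per-attribute strength modulation.

```python
import numpy as np

def semantic_delta(src_emb, tgt_emb):
    """Semantic difference between the target and source prompt embeddings."""
    return tgt_emb - src_emb

def per_attribute_vectors(delta, attribute_basis):
    """Decompose the delta along per-attribute directions.

    attribute_basis: (n_attr, d) array of orthonormal attribute directions
    (hypothetical; the paper learns its decomposition rather than assuming one).
    Returns one translation vector per attribute, shape (n_attr, d).
    """
    coeffs = attribute_basis @ delta           # projection coefficients, (n_attr,)
    return coeffs[:, None] * attribute_basis   # coefficient times direction

def compose(vectors, strengths):
    """Independently scale each attribute's vector and sum into one edit direction."""
    return (np.asarray(strengths)[:, None] * vectors).sum(axis=0)
```

Setting one entry of `strengths` to zero leaves that attribute untouched, which is the sense in which control is compositional and independently modulated.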