IntraStyler: Exemplar-based Style Synthesis for Cross-modality Domain Adaptation

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited style diversity of synthetic target-domain data in cross-modality unsupervised domain adaptation by proposing an exemplar-based style synthesis method. Without requiring any prior knowledge of intra-domain variations, the approach dynamically guides the image translation process with exemplar images to generate diverse, controllable target-domain styles. A style encoder trained with contrastive learning extracts style-only representations, which are integrated into the exemplar-guided framework to effectively disentangle content and style. Experiments on the CrossMoDA 2023 dataset demonstrate that the proposed method significantly enhances the diversity of synthesized data and substantially improves performance on downstream segmentation tasks.

๐Ÿ“ Abstract
Image-level domain alignment is the de facto approach for unsupervised domain adaptation, where unpaired image translation is used to minimize the domain gap. Prior studies mainly focus on the domain shift between the source and target domains, whereas the intra-domain variability remains under-explored. To address the latter, an effective strategy is to diversify the styles of the synthetic target domain data during image translation. However, previous methods typically require intra-domain variations to be pre-specified for style synthesis, which may be impractical. In this paper, we propose an exemplar-based style synthesis method named IntraStyler, which can capture diverse intra-domain styles without any prior knowledge. Specifically, IntraStyler uses an exemplar image to guide the style synthesis such that the output style matches the exemplar style. To extract the style-only features, we introduce a style encoder to learn styles discriminatively based on contrastive learning. We evaluate the proposed method on the largest public dataset for cross-modality domain adaptation, CrossMoDA 2023. Our experiments show the efficacy of our method in controllable style synthesis and the benefits of diverse synthetic data for downstream segmentation. Code is available at https://github.com/han-liu/IntraStyler.
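The contrastive style encoder described in the abstract can be illustrated with a minimal sketch. The paper's exact loss, encoder architecture, and hyperparameters are not given in this card; the sketch below assumes a standard symmetric InfoNCE objective over style embeddings, where two augmented views of the same image form a positive pair and all other images in the batch serve as negatives (the function names and batch setup are hypothetical):

```python
import numpy as np

def _log_softmax(x):
    """Row-wise log-softmax, computed in a numerically stable way."""
    m = x.max(axis=1, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=1, keepdims=True))

def info_nce_style_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE over two batches of style embeddings.

    Rows i of z_a and z_b are embeddings of two augmented views of the
    same image (a positive pair); every other row in the batch acts as a
    negative. Minimising this pulls same-style embeddings together and
    pushes different-style embeddings apart, so the encoder learns
    styles discriminatively.
    """
    # L2-normalise so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    # positives sit on the diagonal of the similarity matrix
    loss_ab = -np.mean(np.diag(_log_softmax(logits)))    # a classifies b
    loss_ba = -np.mean(np.diag(_log_softmax(logits.T)))  # b classifies a
    return 0.5 * (loss_ab + loss_ba)

# Sanity check: perfectly aligned positive pairs should score a much
# lower loss than unrelated embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
aligned_loss = info_nce_style_loss(z, z)
random_loss = info_nce_style_loss(z, rng.normal(size=(8, 32)))
```

In an actual training loop the embeddings would come from the style encoder applied to augmented image crops, and the resulting style vectors would then condition the image-translation generator on the exemplar's style.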
Problem

Research questions and friction points this paper is trying to address.

intra-domain variability
style synthesis
cross-modality domain adaptation
unsupervised domain adaptation
image translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

exemplar-based style synthesis
intra-domain variability
contrastive learning
unpaired image translation
cross-modality domain adaptation