🤖 AI Summary
To address unsupervised domain adaptation (UDA) under large distribution shifts between source and target domains, this paper proposes a Style-Aware Self-Intermediate Domain (SSID) mechanism. SSID decouples and randomly fuses object and style features to synthesize an auxiliary intermediate domain equipped with pseudo-labels, effectively bridging the domain gap. An external memory bank is introduced to stably store class-level object and style representations, and joint intra- and inter-domain losses are designed. Theoretical analysis guarantees loss convergence under infinite sampling. The method operates without target-domain annotations, is plug-and-play compatible with diverse backbone architectures, and achieves significant performance gains on mainstream UDA benchmarks, demonstrating strong generalization. Its core innovation lies in the first integration of style-aware representation learning with self-constructed intermediate domain generation—inspired by human transitive reasoning—to simultaneously preserve discriminability and enable effective knowledge transfer.
📝 Abstract
Unsupervised domain adaptation (UDA), which transfers knowledge from a label-rich source domain to a related but unlabeled target domain, has attracted considerable attention. Reducing inter-domain differences has always been a crucial factor in improving performance in UDA, especially for tasks with a large gap between the source and target domains. To this end, we propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information. Inspired by human transitive inference and learning ability, a novel style-aware self-intermediate domain (SSID) is investigated to link two seemingly unrelated concepts through a series of intermediate auxiliary synthesized concepts. Specifically, we propose a novel learning strategy for SSID, which selects samples from both the source and target domains as anchors and then randomly fuses the object and style features of these anchors to generate labeled and style-rich intermediate auxiliary features for knowledge transfer. Moreover, we design an external memory bank to store and update specified labeled features to obtain stable class features and class-wise style features. Based on the proposed memory bank, intra- and inter-domain loss functions are designed to improve class recognition ability and feature compatibility, respectively. Meanwhile, we simulate the rich latent feature space of SSID by infinite sampling and prove the convergence of the loss functions mathematically. Finally, we conduct comprehensive experiments on commonly used domain adaptation benchmarks to evaluate the proposed SAFF. The experimental results show that SAFF can be easily combined with different backbone networks and obtains better performance as a plug-in-plug-out module.
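To make the fusion idea concrete, the sketch below illustrates one plausible reading of the abstract: style is taken as per-channel instance statistics, the "object" part is the style-normalized residual, an intermediate feature is built by re-dressing one anchor's object part with another anchor's style, and a class-wise memory bank is kept fresh with an exponential moving average. This is an illustrative assumption, not the paper's published implementation; the decomposition, the `fuse` helper, and the momentum value 0.9 are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def decouple(feat):
    """Split a feature map (C, H, W) into per-channel style statistics and a
    style-normalized object component (instance-norm-style split; an
    assumption, not the paper's exact decomposition)."""
    mu = feat.mean(axis=(1, 2), keepdims=True)            # style: channel means
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-5   # style: channel stds
    obj = (feat - mu) / sigma                             # style-removed object part
    return obj, (mu, sigma)

def fuse(obj, style):
    """Synthesize an intermediate-domain feature: one anchor's object
    content wearing another anchor's style statistics."""
    mu, sigma = style
    return obj * sigma + mu

# Two anchor features, e.g. one source sample and one target sample.
src = rng.normal(loc=2.0, scale=3.0, size=(8, 4, 4))
tgt = rng.normal(loc=-1.0, scale=0.5, size=(8, 4, 4))

src_obj, src_style = decouple(src)
tgt_obj, tgt_style = decouple(tgt)

# Labeled intermediate feature: source content, target style.  It inherits
# the source anchor's class label, giving the pseudo-labeled auxiliary
# features the abstract describes.
mixed = fuse(src_obj, tgt_style)

# Class-wise memory bank updated by exponential moving average, yielding the
# stable class/style representations the abstract mentions (momentum 0.9 is
# an assumed value).
bank = np.zeros(8)
for _ in range(100):
    feat = rng.normal(size=8)      # stand-in for a labeled feature of one class
    bank = 0.9 * bank + 0.1 * feat
```

After fusion, `mixed` carries the target anchor's channel-wise statistics while preserving the source anchor's normalized spatial content, which is the sense in which the intermediate domain is "style-rich" yet label-preserving.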