AI Summary
Single-domain generalization (SDG) aims to enhance model generalization to unseen target domains using data from only one source domain, posing a fundamental challenge due to the severely limited training distribution and the absence of target-domain priors. To address this, we propose the first adversarial memory-based SDG framework: it constructs an updateable memory bank in the latent space, explicitly augments intra-domain diversity via adversarial feature generation, and achieves implicit cross-domain alignment through invariant subspace projection coupled with feature mapping alignment. Crucially, our method operates without access to target-domain data or domain labels, effectively mitigating single-domain bias. Extensive experiments on multiple standard SDG benchmarks demonstrate substantial improvements over existing approaches, achieving state-of-the-art performance. These results validate the efficacy of synergistically integrating memory enhancement and adversarial generation for modeling domain-invariant representations.
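The adversarial feature generation step described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a simple linear softmax classifier `W` and takes an FGSM-style step in feature space, moving a feature in the direction that increases the classification loss so that the generated feature lies beyond the source distribution. The function name and step size `eps` are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adversarial_feature(feature, label, W, eps=0.5):
    """FGSM-style step in feature space (illustrative sketch):
    perturb the feature along the gradient of the cross-entropy
    loss of a linear softmax classifier W, pushing it away from
    the source feature distribution."""
    probs = softmax(W @ feature)              # (C,) class probabilities
    onehot = np.eye(W.shape[0])[label]
    grad = W.T @ (probs - onehot)             # dCE/dfeature for softmax + linear logits
    return feature + eps * np.sign(grad)

rng = np.random.default_rng(1)
W = rng.normal(size=(5, 16))                  # C=5 classes, d=16 feature dims
f = rng.normal(size=16)
adv = adversarial_feature(f, label=2, W=W)    # feature pushed outside the source distribution
```

Because the cross-entropy loss of a linear softmax model is convex in the feature, a step along the signed gradient is guaranteed not to decrease the loss, which is the property the augmentation relies on.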
Abstract
Domain Generalization (DG) aims to train models that generalize to unseen testing domains by leveraging data from multiple training domains. However, traditional DG methods rely on the availability of multiple diverse training domains, limiting their applicability in data-constrained scenarios. Single Domain Generalization (SDG) addresses a more realistic and challenging setting in which the training data are restricted to a single domain distribution. The main challenges in SDG stem from the limited diversity of training data and the inaccessibility of unseen testing data distributions. To tackle these challenges, we propose a single domain generalization method that leverages an adversarial memory bank to augment training features. Our memory-based feature augmentation network maps both training and testing features into an invariant subspace spanned by diverse memory features, implicitly aligning the training and testing domains in the projected space. To maintain a diverse and representative feature memory bank, we introduce an adversarial feature generation method that creates features extending beyond the training domain distribution. Experimental results demonstrate that our approach achieves state-of-the-art performance on standard single domain generalization benchmarks.
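The memory-based projection in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's architecture: a feature is re-expressed as an attention-weighted combination of memory slots, so every feature (training or testing) lands in the subspace spanned by the memory bank. The function name `project_onto_memory` and the attention weighting are hypothetical choices.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def project_onto_memory(feature, memory_bank):
    """Map a feature into the subspace spanned by memory features
    (illustrative sketch): compute similarity to each memory slot,
    normalize with softmax, and return the weighted combination.
    Training and testing features share the same memory, so both
    are implicitly aligned in the projected space."""
    sims = memory_bank @ feature              # (K,) similarity to each slot
    weights = softmax(sims)                   # attention over memory slots
    return weights @ memory_bank              # lies in span(memory rows)

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))             # K=8 memory slots, d=16 features
f = rng.normal(size=16)
proj = project_onto_memory(f, memory)
```

By construction, the output is a linear combination of memory rows, so any downstream classifier only ever sees inputs from that shared subspace.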