Single Domain Generalization with Adversarial Memory

📅 2025-03-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Single-domain generalization (SDG) aims to enhance model generalization to unseen target domains using data from only one source domain, a fundamentally challenging setting due to the severely limited training distribution and the absence of target-domain priors. To address this, we propose the first adversarial memory-based SDG framework: it constructs an updateable memory bank in the latent space, explicitly augments intra-domain diversity via adversarial feature generation, and achieves implicit cross-domain alignment through invariant subspace projection coupled with feature mapping alignment. Crucially, our method operates without access to target-domain data or domain labels, effectively mitigating single-domain bias. Extensive experiments on multiple standard SDG benchmarks demonstrate substantial improvements over existing approaches, achieving state-of-the-art performance. These results validate the efficacy of synergistically integrating memory enhancement and adversarial generation for modeling domain-invariant representations.

๐Ÿ“ Abstract
Domain Generalization (DG) aims to train models that can generalize to unseen testing domains by leveraging data from multiple training domains. However, traditional DG methods rely on the availability of multiple diverse training domains, limiting their applicability in data-constrained scenarios. Single Domain Generalization (SDG) addresses the more realistic and challenging setting by restricting the training data to a single domain distribution. The main challenges in SDG stem from the limited diversity of training data and the inaccessibility of unseen testing data distributions. To tackle these challenges, we propose a single domain generalization method that leverages an adversarial memory bank to augment training features. Our memory-based feature augmentation network maps both training and testing features into an invariant subspace spanned by diverse memory features, implicitly aligning the training and testing domains in the projected space. To maintain a diverse and representative feature memory bank, we introduce an adversarial feature generation method that creates features extending beyond the training domain distribution. Experimental results demonstrate that our approach achieves state-of-the-art performance on standard single domain generalization benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Generalizing from a single training domain despite its limited data diversity.
Aligning unseen testing domains with the single available training domain.
Augmenting training features in SDG without access to target-domain data or labels.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial memory bank for feature augmentation
Invariant subspace mapping for domain alignment
Adversarial feature generation beyond training domain
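The two core ideas above can be illustrated with a minimal sketch: project an input feature onto the subspace spanned by the memory-bank features (so training and testing features land in the same memory-spanned space), and perturb the bank to push its entries beyond the source distribution. This is an assumption-laden toy in NumPy, not the paper's implementation; `project_to_memory_subspace` and `adversarial_memory_step` are hypothetical names, and the perturbation here is a simple push away from the source centroid standing in for the paper's gradient-based adversarial feature generation.

```python
import numpy as np

def project_to_memory_subspace(f, memory):
    """Project feature f onto the subspace spanned by memory features.

    memory: (K, d) bank of K memory features; f: (d,) input feature.
    Least-squares projection onto the column space of memory.T maps any
    input into the memory-spanned subspace, implicitly aligning training
    and testing features in that projected space.
    """
    coeffs, *_ = np.linalg.lstsq(memory.T, f, rcond=None)
    return memory.T @ coeffs

def adversarial_memory_step(memory, train_mean, step=0.1):
    """Illustrative (not the paper's) adversarial update: nudge each
    memory feature away from the training-feature mean so the bank
    covers regions beyond the source distribution."""
    direction = memory - train_mean                      # away from source centroid
    norm = np.linalg.norm(direction, axis=1, keepdims=True) + 1e-8
    return memory + step * direction / norm
```

Because the projection is a least-squares solve, it is idempotent: projecting an already-projected feature returns it unchanged, which is the sense in which the memory subspace acts as an invariant space for both domains.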