🤖 AI Summary
This paper addresses the severe performance degradation that models suffer on unseen target domains. It proposes CASA, a two-stage context-aware self-adaptation framework for domain generalization. During meta-training, CASA simulates multi-source domain shifts to learn domain-invariant representations. At test time, it leverages lightweight batch-level contextual statistics—such as batch feature means—to dynamically adapt normalization parameters and model outputs, enabling implicit, label-free domain adaptation. The key innovations are: (i) formulating batch statistics as implicit domain context, enabling context-driven adaptation; and (ii) integrating meta-generalization simulation with multi-source ensemble inference. Extensive experiments on standard domain generalization benchmarks demonstrate state-of-the-art accuracy and robustness on unseen target domains.
📝 Abstract
Domain generalization aims to develop learning algorithms on source training domains such that the learned model generalizes well to an unseen testing domain. We present a novel two-stage approach called Context-Aware Self-Adaptation (CASA) for domain generalization. CASA simulates an approximate meta-generalization scenario and incorporates a self-adaptation module that adjusts pre-trained meta-source models to the meta-target domains while maintaining their predictive capability on the meta-source domains. The core idea of self-adaptation is to leverage contextual information, such as the mean of mini-batch features, as domain knowledge to automatically adapt a model trained in the first stage to new contexts in the second stage. Finally, we use an ensemble of multiple meta-source models to perform inference on the testing domain. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on standard benchmarks.
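The abstract does not spell out how batch-level context drives the adaptation, so here is a minimal, hypothetical sketch of the general idea: treating mini-batch feature statistics as domain context and blending them with the normalization statistics stored from source training. The function name, the scalar-feature simplification, and the interpolation weight `alpha` are illustrative assumptions, not details from the paper.

```python
import statistics

def context_adapted_normalize(features, src_mean, src_var, alpha=0.5, eps=1e-5):
    """Normalize a test mini-batch using a blend of source-domain statistics
    and batch-level ("context") statistics.

    features : list of scalar feature values, one per sample (a simplification;
               real models would do this per channel).
    src_mean, src_var : statistics stored from (meta-)source training.
    alpha : hypothetical interpolation weight toward the batch context;
            alpha = 0 keeps the source statistics, alpha = 1 trusts the batch fully.
    """
    # Batch-level contextual statistics act as implicit domain knowledge.
    batch_mean = statistics.fmean(features)
    batch_var = statistics.pvariance(features, mu=batch_mean)
    # Interpolate source and context statistics.
    mean = (1.0 - alpha) * src_mean + alpha * batch_mean
    var = (1.0 - alpha) * src_var + alpha * batch_var
    return [(x - mean) / (var + eps) ** 0.5 for x in features]
```

With `alpha = 1.0` the batch is normalized purely by its own statistics, so the adapted features are centered at zero regardless of how far the target domain's feature distribution has shifted from the source statistics.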