🤖 AI Summary
Domain shift severely degrades deep learning model performance when models trained on labeled source data are applied to unlabeled target data, and existing unsupervised domain adaptation (UDA) methods address it by fine-tuning feature extractors, resulting in low efficiency, poor interpretability, and limited scalability. This paper proposes a novel, efficient UDA paradigm that freezes a pretrained encoder and optimizes only the decision boundary. Leveraging the inherent domain-invariant geometric structure, namely intra-class compactness and inter-class separability, induced by large-scale pretrained models in feature space, the approach performs offline feature extraction followed by a single-pass, full-dataset alignment of decision boundaries across domains. Crucially, no network parameters are updated during adaptation. Evaluated on multiple benchmarks, the method matches or surpasses state-of-the-art performance while significantly reducing memory footprint and computational overhead. Its generalizability is further validated across diverse scientific domains, including protein structure prediction, remote sensing classification, and seismic event detection.
📝 Abstract
Domain shift, characterized by degraded model performance during transition from labeled source domains to unlabeled target domains, poses a persistent challenge for deploying deep learning systems. Current unsupervised domain adaptation (UDA) methods predominantly rely on fine-tuning feature extractors - an approach limited by inefficiency, reduced interpretability, and poor scalability to modern architectures.
Our analysis reveals that models pretrained on large-scale data exhibit domain-invariant geometric patterns in their feature space, characterized by intra-class clustering and inter-class separation, thereby preserving transferable discriminative structures. These findings indicate that domain shifts primarily manifest as boundary misalignment rather than feature degradation.
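The claimed geometric pattern can be quantified with a simple compactness-versus-separation ratio. The sketch below is illustrative only: it uses synthetic Gaussian clusters as a stand-in for frozen pretrained-encoder features, and the `compactness` helper is a hypothetical metric, not a measure defined by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for pretrained-encoder features of two classes.
# In practice these would be features from a frozen pretrained model.
feats_a = rng.normal(-3.0, 1.0, (100, 8))
feats_b = rng.normal(+3.0, 1.0, (100, 8))

def compactness(f):
    """Mean distance of samples to their own class centroid."""
    return np.mean(np.linalg.norm(f - f.mean(axis=0), axis=1))

intra = 0.5 * (compactness(feats_a) + compactness(feats_b))   # intra-class spread
inter = np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0))  # centroid gap

ratio = inter / intra  # ratio > 1 indicates separable, compact class clusters
```

If this ratio stays high for both source- and target-domain features, the discriminative structure is preserved and only the decision boundary needs realignment, which is the premise of the next paragraph.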
Unlike fine-tuning entire pretrained models - which risks introducing unpredictable feature distortions - we propose the Feature-space Planes Searcher (FPS): a novel domain adaptation framework that optimizes decision boundaries by leveraging these geometric patterns while keeping the feature encoder frozen. This streamlined approach enables interpretable analysis of the adaptation process while substantially reducing memory and computational costs through offline feature extraction, permitting full-dataset optimization in a single computation cycle.
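The abstract does not spell out FPS's optimization procedure, but the general recipe it describes (extract features once offline with a frozen encoder, then optimize only a decision plane over the full cached dataset) can be sketched as follows. The synthetic features and the plain full-batch logistic-regression update are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline step: features are extracted once by a frozen encoder and cached.
# Here, two well-separated Gaussian clusters mimic such cached features.
n, d = 200, 16
X = np.vstack([rng.normal(-2.0, 1.0, (n, d)), rng.normal(+2.0, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Adaptation step: only the decision plane (w, b) is optimized;
# no encoder parameters are touched. Each iteration is a full-dataset
# pass, which is cheap because the features are already cached.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)        # full-batch gradient step
    b -= lr * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
```

Because the only trainable object is a `d + 1`-parameter plane, the whole dataset fits in one optimization cycle, which is the efficiency argument the paragraph makes.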
Evaluations on public benchmarks demonstrate that FPS achieves performance competitive with or superior to state-of-the-art methods. FPS scales efficiently with multimodal large models and shows versatility across diverse domains including protein structure prediction, remote sensing classification, and earthquake detection. We anticipate FPS will provide a simple, effective, and generalizable paradigm for transfer learning, particularly in domain adaptation tasks.