🤖 AI Summary
This work addresses the challenge of test-time adaptation under dynamic distribution shifts without access to source data, where existing methods struggle to balance efficiency and generalization. The authors establish for the first time the existence of a “golden subspace”—a minimal yet sufficient feature subspace that preserves model performance—and demonstrate its alignment with the row space of the pretrained classifier. Building on this insight, they propose GOLD, a method that leverages the sample-wise Average Gradient Outer Product (AGOP) as an efficient proxy for the classifier weights, uses a lightweight adapter to project features into this subspace, and learns a compact scaling vector so the subspace can be estimated dynamically without retraining. Experiments across multiple benchmarks—including image classification, semantic segmentation, and autonomous driving—show that GOLD significantly outperforms prior approaches in inference efficiency, stability, and overall accuracy.
📝 Abstract
Continual Test-Time Adaptation (CTTA) aims to enable models to adapt online to unlabeled data streams under distribution shift without accessing source data. Existing CTTA methods face an efficiency-generalization trade-off: updating more parameters improves adaptation but severely reduces online inference efficiency. An ideal solution is to achieve comparable adaptation with minimal feature updates; we call this minimal subspace the golden subspace. We prove its existence in a single-step adaptation setting and show that it coincides with the row space of the pretrained classifier. To enable online maintenance of this subspace, we introduce the sample-wise Average Gradient Outer Product (AGOP) as an efficient proxy for estimating the classifier weights without retraining. Building on these insights, we propose Guided Online Low-rank Directional adaptation (GOLD), which uses a lightweight adapter to project features onto the golden subspace and learns a compact scaling vector while the subspace is dynamically updated via AGOP. Extensive experiments on classification and segmentation benchmarks, including autonomous-driving scenarios, demonstrate that GOLD attains superior efficiency, stability, and overall performance. Our code is available at https://github.com/AIGNLAI/GOLD.
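To make the row-space claim concrete, here is a minimal NumPy sketch (not the authors' implementation) of the AGOP idea: averaging per-sample Jacobian outer products and taking the top eigenvectors as the feature subspace. For a purely linear classifier the Jacobian of the logits with respect to the features is the weight matrix itself, so the AGOP reduces to `W^T W` and its leading eigenvectors span the row space of `W` — the "golden subspace" described above. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def agop_subspace(jacobians, k):
    """Top-k eigenvectors of the sample-wise Average Gradient Outer Product.

    AGOP = mean_i J_i^T J_i, where J_i is the per-sample Jacobian of the
    classifier output w.r.t. the features. Its leading eigenvectors span
    the directions the classifier is most sensitive to.
    """
    M = np.mean([J.T @ J for J in jacobians], axis=0)
    _, vecs = np.linalg.eigh(M)   # eigenvalues ascending
    return vecs[:, -k:]           # (d, k) orthonormal basis

def project(features, basis):
    """Low-rank projection of features onto the estimated subspace."""
    return features @ basis @ basis.T

# Toy check with a linear classifier f(x) = W x: every per-sample
# Jacobian equals W, so AGOP = W^T W and the subspace is W's row space.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))        # 3 classes, 8-dim features
jacobians = [W for _ in range(16)]
basis = agop_subspace(jacobians, k=3)

x = rng.standard_normal(8)
x_proj = project(x, basis)
# Logits are preserved: the discarded directions lie in W's null space.
print(np.allclose(W @ x, W @ x_proj))  # True
```

In GOLD itself the adapter and scaling vector are learned online rather than computed in closed form, but the sketch shows why projecting onto this subspace can leave the classifier's predictions essentially intact.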