🤖 AI Summary
This work addresses the limitation of single-pool memory in test-time adaptation (TTA) for modeling the multimodal distribution of test streams. To this end, we propose a Multi-Cluster Memory (MCM) framework that explicitly uncovers the intrinsic multi-cluster structure of test data. MCM introduces a lightweight pixel-level statistical descriptor and integrates Gaussian mixture model-based diagnosis, dynamic cluster merging, and uniform cluster retrieval to enable structured memory management. The framework is designed to seamlessly augment existing TTA methods without architectural modifications. Extensive experiments on CIFAR-10/100-C, ImageNet-C, and DomainNet demonstrate consistent and substantial performance gains, with average improvements of up to 12.13%, and the benefits grow further as the complexity of the distribution shift increases.
📝 Abstract
Test-time adaptation (TTA) adapts pre-trained models to distribution shifts at inference using only unlabeled test data. Under the Practical TTA (PTTA) setting, where test streams are temporally correlated and non-i.i.d., memory has become an indispensable component for stable adaptation, yet existing methods universally store samples in a single unstructured pool. We show that this single-cluster design is fundamentally mismatched to PTTA: a stream clusterability analysis reveals that test streams are inherently multi-modal, with the optimal number of mixture components consistently far exceeding one. To close this structural gap, we propose Multi-Cluster Memory (MCM), a plug-and-play framework that organizes stored samples into multiple clusters using lightweight pixel-level statistical descriptors. MCM introduces three complementary mechanisms: descriptor-based cluster assignment to capture distinct distributional modes, Adjacent Cluster Consolidation (ACC) to bound memory usage by merging the most similar temporally adjacent clusters, and Uniform Cluster Retrieval (UCR) to ensure balanced supervision across all modes during adaptation. Integrated with three contemporary TTA methods on CIFAR-10-C, CIFAR-100-C, ImageNet-C, and DomainNet, MCM achieves consistent improvements across all 12 configurations, with gains of up to 5.00% on ImageNet-C and 12.13% on DomainNet. Notably, these gains scale with distributional complexity: larger label spaces with greater multi-modality benefit most from multi-cluster organization. GMM-based memory diagnostics further confirm that MCM maintains near-optimal distributional balance, entropy, and mode coverage, whereas single-cluster memory exhibits persistent imbalance and progressive mode loss. These results establish memory organization as a key design axis for practical test-time adaptation.
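The three mechanisms named in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the descriptor is taken to be per-channel pixel means and standard deviations, cluster assignment uses a Euclidean distance threshold `tau`, ACC merges the closest pair of temporally adjacent clusters, and UCR samples equally per cluster. The class name, thresholds, and data layout are all hypothetical, not the paper's implementation.

```python
import numpy as np

def descriptor(x):
    # Lightweight pixel-level statistic for an image shaped (C, H, W):
    # per-channel mean and std. (Hypothetical choice; the abstract only
    # says "pixel-level statistical descriptors".)
    return np.concatenate([x.mean(axis=(1, 2)), x.std(axis=(1, 2))])

class MultiClusterMemory:
    def __init__(self, max_clusters=4, tau=1.0):
        self.max_clusters = max_clusters
        self.tau = tau        # distance threshold for opening a new cluster
        self.clusters = []    # ordered by creation time; each holds a
                              # running centroid and its stored samples

    def add(self, x):
        # Descriptor-based cluster assignment: join the nearest cluster
        # if it is close enough, otherwise open a new mode.
        d = descriptor(x)
        if self.clusters:
            dists = [np.linalg.norm(d - c["centroid"]) for c in self.clusters]
            i = int(np.argmin(dists))
            if dists[i] < self.tau:
                c = self.clusters[i]
                n = len(c["samples"])
                c["centroid"] = (c["centroid"] * n + d) / (n + 1)
                c["samples"].append(x)
                return
        self.clusters.append({"centroid": d, "samples": [x]})
        if len(self.clusters) > self.max_clusters:
            self._consolidate()

    def _consolidate(self):
        # Adjacent Cluster Consolidation (ACC): merge the most similar
        # pair of temporally adjacent clusters to bound memory usage.
        gaps = [np.linalg.norm(a["centroid"] - b["centroid"])
                for a, b in zip(self.clusters, self.clusters[1:])]
        i = int(np.argmin(gaps))
        a, b = self.clusters[i], self.clusters.pop(i + 1)
        na, nb = len(a["samples"]), len(b["samples"])
        a["centroid"] = (a["centroid"] * na + b["centroid"] * nb) / (na + nb)
        a["samples"].extend(b["samples"])

    def retrieve(self, k, rng=None):
        # Uniform Cluster Retrieval (UCR): draw (roughly) equally from
        # every cluster so all modes contribute balanced supervision.
        rng = rng or np.random.default_rng(0)
        per = max(1, k // max(1, len(self.clusters)))
        batch = []
        for c in self.clusters:
            idx = rng.choice(len(c["samples"]),
                             size=min(per, len(c["samples"])), replace=False)
            batch.extend(c["samples"][j] for j in idx)
        return batch
```

In use, feeding the memory a stream whose images come from two clearly distinct modes (e.g. dark vs. bright) yields two clusters, and `retrieve` then returns a batch split evenly between them; with `max_clusters=1`, ACC immediately merges the two modes back into one pool, which is exactly the single-cluster behavior the abstract argues against.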