SEVA: Leveraging Single-Step Ensemble of Vicinal Augmentations for Test-Time Adaptation

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low utilization of reliable samples and the high computational overhead of multiple augmentation rounds in test-time adaptation (TTA) under distribution shift, this paper proposes Single-step Ensemble of Vicinal Augmentations (SEVA). SEVA establishes a theoretical framework that casts the effect of multi-round vicinal augmentation as the optimization of an upper bound on the entropy loss. It introduces a confidence-driven sample-filtering mechanism that discards ambiguous samples liable to confuse the model, and integrates the effect of multiple rounds of augmentation training into a single update step, so no extra augmentation passes are needed at inference time. Extensive experiments across diverse network architectures and challenging distribution-shift scenarios show that SEVA consistently outperforms state-of-the-art TTA methods in accuracy while retaining real-time inference and generalizing well across models and tasks.

📝 Abstract
Test-time adaptation (TTA) aims to enhance model robustness against distribution shifts through rapid model adaptation during inference. While existing TTA methods often rely on entropy-based unsupervised training and achieve promising results, the common practice of a single round of entropy training is typically unable to adequately utilize reliable samples, hindering adaptation efficiency. In this paper, we discover that augmentation strategies can effectively unleash the potential of reliable samples, but their rapidly growing computational cost impedes real-time application. To address this limitation, we propose a novel TTA approach named Single-step Ensemble of Vicinal Augmentations (SEVA), which can take advantage of data augmentations without increasing the computational burden. Specifically, instead of explicitly using augmentation to generate new data, SEVA develops a theoretical framework to explore the impact of multiple augmentations on model adaptation and proposes to optimize an upper bound of the entropy loss, integrating the effects of multiple rounds of augmentation training into a single step. Furthermore, we discover and verify that using the upper bound as the loss is more conducive to the selection mechanism, as it can effectively filter out harmful samples that confuse the model. Combining these two key advantages, the proposed efficient loss and a complementary selection strategy can simultaneously boost the potential of reliable samples and meet the stringent time requirements of TTA. Comprehensive experiments on various network architectures across challenging testing scenarios demonstrate the impressive performance and broad adaptability of SEVA. The code will be publicly available.
Problem

Research questions and friction points this paper is trying to address.

Enhancing model robustness against distribution shifts during inference
Improving adaptation efficiency by utilizing reliable samples effectively
Reducing computational cost while leveraging multiple augmentations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-step ensemble of vicinal augmentations for TTA
Optimizes entropy loss upper bound for efficiency
Filters harmful samples with complementary selection strategy
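The upper-bound idea can be illustrated with Jensen's inequality: Shannon entropy is concave, so the entropy of the prediction averaged over K vicinal (augmented) views upper-bounds the average entropy of the individual views. Minimizing this single quantity therefore pushes down the multi-round augmentation objective in one step, and the same quantity can drive a confidence threshold for filtering ambiguous samples. The sketch below is an illustrative toy under these assumptions (the views, noise scale, and threshold `tau` are hypothetical), not the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a logit vector
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    # Shannon entropy (natural log) of a probability vector
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(0)
logits = rng.normal(size=10)   # toy logits for one test sample
K = 8                          # number of vicinal views (assumed)

# simulate K vicinal views by perturbing the logits (stand-in for augmentation)
views = [softmax(logits + 0.3 * rng.normal(size=10)) for _ in range(K)]

# multi-round objective: average entropy over the K augmented predictions
avg_entropy = np.mean([entropy(p) for p in views])

# single-step surrogate: entropy of the averaged prediction.
# By Jensen's inequality for the concave entropy, this upper-bounds avg_entropy.
upper_bound = entropy(np.mean(views, axis=0))
assert upper_bound >= avg_entropy

# confidence-driven filtering (hypothetical rule): skip samples whose
# surrogate entropy exceeds a fraction of the maximum entropy log(C)
tau = 0.4 * np.log(10)
keep = upper_bound < tau
```

Minimizing `upper_bound` needs only one averaged forward quantity per sample instead of K separate entropy-training rounds, which is the efficiency the Innovation bullets describe.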
Zixuan Hu
School of Computer Science, Peking University, Beijing, China
Yichun Hu
Cornell University
Ling-Yu Duan
School of Computer Science, Peking University, Beijing, China