AI Summary
In continual test-time adaptation, entropy minimization often induces model collapse, where all samples are predicted as the same class. To address this, we propose Ranked Entropy Minimization (REM), a method that preserves the entropy ordering across predictions on samples of varying difficulty via a hierarchical progressive masking mechanism. REM is the first entropy-based approach to enable stable, scalable optimization in continual unsupervised online adaptation. It requires neither source-domain data nor labels, and combines entropy minimization, probabilistic distribution alignment, and hierarchical masking. Evaluated on multiple benchmarks, REM significantly mitigates single-class collapse, yielding consistent gains in accuracy and robustness under distribution shift. The implementation is publicly available.
Abstract
Test-time adaptation aims to adapt to realistic environments in an online manner by learning during test time. Entropy minimization has emerged as a principal strategy for test-time adaptation due to its efficiency and adaptability. Nevertheless, it remains underexplored in continual test-time adaptation, where stability is more important. We observe that entropy minimization often suffers from model collapse, converging to a trivial solution that predicts a single class for all images. We propose ranked entropy minimization to mitigate this stability problem and extend the method's applicability to continual scenarios. Our approach explicitly structures prediction difficulty through a progressive masking strategy: it gradually aligns the model's probability distributions across different levels of prediction difficulty while preserving the rank order of entropy. The proposed method is extensively evaluated across various benchmarks, demonstrating its effectiveness through empirical results. Our code is available at https://github.com/pilsHan/rem
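The idea of preserving the rank order of entropy across difficulty levels can be sketched as a loss: minimize the entropy of the easier (less masked) view, and penalize any violation of the expected ordering in which the harder (more heavily masked) view should have higher entropy. The NumPy sketch below is illustrative only; the function names, the two-view setup, and the hinge-style ranking penalty are our assumptions, not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Shannon entropy per sample (small epsilon avoids log(0))
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def ranked_entropy_loss(logits_easy, logits_hard, margin=0.0):
    """Entropy minimization on the easy (less masked) view, plus a
    hinge penalty whenever the easy view's entropy exceeds the hard
    (more masked) view's, i.e. when the expected rank order of
    entropy across difficulty levels is violated."""
    h_easy = entropy(softmax(logits_easy))
    h_hard = entropy(softmax(logits_hard))
    rank_penalty = np.maximum(0.0, h_easy - h_hard + margin)
    return (h_easy + rank_penalty).mean()
```

For example, a confident prediction on the easy view and a less confident one on the hard view satisfies the ordering, so only the entropy term contributes; swapping the two views triggers the penalty and yields a strictly larger loss.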