Ranked Entropy Minimization for Continual Test-Time Adaptation

๐Ÿ“… 2025-05-22
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In continual test-time adaptation, entropy minimization often induces model collapse, where the model predicts the same class for all samples. To address this, we propose Ranked Entropy Minimization (REM), a method that preserves the entropy ordering across predictions on samples of varying difficulty via a hierarchical progressive masking mechanism. REM enables stable optimization in continual unsupervised online adaptation. It requires neither source-domain data nor labels, and integrates entropy minimization, probability distribution alignment, and hierarchical masking. Evaluated on multiple benchmarks, REM significantly mitigates single-class collapse, yielding consistent improvements in accuracy and robustness under distribution shifts. The implementation is publicly available.
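The collapse mode described above stems from the standard test-time objective itself: minimizing the mean Shannon entropy of the softmax predictions admits a trivial solution in which every sample is pushed toward the same confident class. A minimal sketch of that baseline objective (plain Python, not the paper's code):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def prediction_entropy(logits):
    """Shannon entropy of the softmax prediction.

    This is the quantity entropy-minimization TTA methods drive toward
    zero; without extra constraints, predicting one fixed class for
    every input also minimizes it, which is the collapse the summary
    describes.
    """
    probs = softmax(logits)
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction has low entropy; a uniform one has high entropy.
confident = prediction_entropy([10.0, 0.0, 0.0])
uniform = prediction_entropy([1.0, 1.0, 1.0])  # = ln(3) for 3 classes
```

Driving `prediction_entropy` down for every test batch is what makes the method efficient, and also what makes it unstable without REM's ordering constraint.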

๐Ÿ“ Abstract
Test-time adaptation aims to adapt to realistic environments in an online manner by learning during test time. Entropy minimization has emerged as a principal strategy for test-time adaptation due to its efficiency and adaptability. Nevertheless, it remains underexplored in continual test-time adaptation, where stability is more important. We observe that the entropy minimization method often suffers from model collapse, where the model converges to predicting a single class for all images due to a trivial solution. We propose ranked entropy minimization to mitigate the stability problem of the entropy minimization method and extend its applicability to continuous scenarios. Our approach explicitly structures the prediction difficulty through a progressive masking strategy. Specifically, it gradually aligns the model's probability distributions across different levels of prediction difficulty while preserving the rank order of entropy. The proposed method is extensively evaluated across various benchmarks, demonstrating its effectiveness through empirical results. Our code is available at https://github.com/pilsHan/rem
Problem

Research questions and friction points this paper is trying to address.

Mitigates model collapse in continual test-time adaptation
Extends entropy minimization to continuous scenarios
Structures prediction difficulty via progressive masking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ranked entropy minimization for stability
Progressive masking for structured prediction
Aligns probability distributions preserving entropy rank
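One way to read the rank-preservation idea in the bullets above: a lightly masked (easier) view of a sample should never be predicted with higher entropy than a heavily masked (harder) view. A hinge-style penalty on that ordering is one illustrative way to encode it; this is a sketch under that assumption, not the paper's actual loss:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_penalty(logits_easy, logits_hard, margin=0.0):
    """Hinge penalty enforcing an entropy ordering across difficulty.

    Zero when the easier (less masked) view already has lower
    predictive entropy than the harder (more masked) view, i.e. when
    the expected rank order holds; positive otherwise. `margin` is a
    hypothetical slack parameter, not taken from the paper.
    """
    h_easy = entropy(softmax(logits_easy))
    h_hard = entropy(softmax(logits_hard))
    return max(0.0, h_easy - h_hard + margin)
```

Adding such a term alongside the entropy objective penalizes updates that make easy views less certain than hard ones, which is the kind of structure that blocks the all-samples-one-class solution.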
๐Ÿ”Ž Similar Papers
No similar papers found.