AI Summary
To address modality misalignment under test-time distribution shifts in multimodal learning, this paper proposes a label-free test-time adaptation framework named ABPEM. Methodologically, it introduces (1) Attention Bootstrapping, a self-supervised mechanism that narrows the gap between intra-modality discrepancies (measured by self-attention) and inter-modality discrepancies (measured by cross-attention) by promoting cross-attention under the guidance of self-attention, thereby encouraging better modality fusion; and (2) Principal Entropy Minimization, a refinement of standard entropy minimization that focuses on the principal parts of entropy and excludes less reliable gradient information, suppressing gradient noise during adaptation. Evaluated on multiple standard multimodal benchmarks, the method outperforms competing test-time adaptation baselines, demonstrating improved robustness and generalization in cross-modal fusion under diverse distribution shifts, without requiring labeled target data or architectural modifications to the pretrained model.
Abstract
Test-time adaptation aims to adapt a well-trained model to potential distribution shifts at test time using only unlabeled test data, without access to the original training data. While previous efforts mainly focus on a single modality, test-time distribution shift in the multi-modal setting is more complex and calls for new solutions. This paper tackles the problem of multi-modal test-time adaptation by proposing a novel method named Attention Bootstrapping with Principal Entropy Minimization (ABPEM). We observe that test-time distribution shift causes misalignment across modalities, leading to a large gap between intra-modality discrepancies (measured by self-attention) and inter-modality discrepancies (measured by cross-attention). We name this the attention gap. This attention gap widens with more severe distribution shifts, hindering effective modality fusion. To mitigate this attention gap and encourage better modality fusion, we propose attention bootstrapping that promotes cross-attention with the guidance of self-attention. Moreover, to reduce the gradient noise in the commonly-used entropy minimization, we adopt principal entropy minimization, a refinement of entropy minimization that reduces gradient noise by focusing on the principal parts of entropy, excluding less reliable gradient information. Extensive experiments on the benchmarks validate the effectiveness of the proposed ABPEM in comparison with competing baselines.
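The "attention gap" described above can be made concrete with a small sketch. The snippet below is illustrative only and assumes one simple way to quantify the gap (mean peak self-attention weight minus mean peak cross-attention weight); the paper's exact measurement may differ. The function name `attention_gap` and the use of peak attention mass are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_gap(q_a, k_a, k_b):
    """Illustrative 'attention gap' for queries from modality A.

    q_a: (n, d) queries from modality A
    k_a: (n, d) keys from modality A (self-attention, intra-modality)
    k_b: (m, d) keys from modality B (cross-attention, inter-modality)

    Under distribution shift the modalities misalign, cross-attention
    flattens, and the gap between the two attention patterns widens.
    """
    d = q_a.shape[-1]
    self_attn = softmax(q_a @ k_a.T / np.sqrt(d))   # intra-modality
    cross_attn = softmax(q_a @ k_b.T / np.sqrt(d))  # inter-modality
    # compare how concentrated each attention distribution is
    return self_attn.max(axis=-1).mean() - cross_attn.max(axis=-1).mean()
```

With well-aligned keys the two terms are close and the gap is small; when cross-modal keys drift away from the queries, cross-attention spreads out and the gap grows, which is the signal attention bootstrapping aims to close.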
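Principal entropy minimization, as the abstract describes it, restricts the entropy objective to its most reliable parts. The sketch below assumes one plausible reading, computing entropy only over the top-k highest-probability classes and discarding the low-probability tail, whose gradients are noisiest; the paper's actual decomposition of entropy into "principal" and residual parts may be different, and `k` is a hypothetical hyperparameter.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy along the last axis
    return -(p * np.log(p + eps)).sum(-1)

def principal_entropy(probs, k=3):
    """Hedged sketch: entropy over the 'principal' (top-k) classes only.

    probs: (..., C) predicted class probabilities.
    Keeps the k largest probabilities, renormalizes them, and computes
    entropy there, excluding the unreliable low-probability tail from
    the adaptation gradient.
    """
    idx = np.argsort(probs, axis=-1)[..., -k:]        # top-k class indices
    top = np.take_along_axis(probs, idx, axis=-1)     # top-k probabilities
    top = top / top.sum(-1, keepdims=True)            # renormalize
    return entropy(top)
```

Minimizing this quantity at test time sharpens predictions, as in standard entropy minimization, while the truncation plays the role the abstract attributes to principal entropy minimization: removing gradient contributions from the least reliable parts of the prediction.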