Attention Bootstrapping for Multi-Modal Test-Time Adaptation

📅 2025-03-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the modality misalignment that arises under test-time distribution shifts in multi-modal learning, this paper proposes a label-free test-time adaptation framework, ABPEM. Methodologically, it introduces (1) Attention Bootstrapping, a self-supervised mechanism that promotes inter-modality cross-attention with the guidance of intra-modality self-attention, narrowing the attention gap that widens under distribution shift and hinders modality fusion; and (2) Principal Entropy Minimization, a refinement of entropy minimization that reduces gradient noise during adaptation by focusing on the principal parts of entropy and excluding less reliable gradient information. Evaluated on standard multi-modal benchmarks, the method outperforms existing test-time adaptation baselines, demonstrating robust cross-modal fusion under diverse distribution shifts without requiring labeled target data or architectural modifications to the pretrained model.

๐Ÿ“ Abstract
Test-time adaptation aims to adapt a well-trained model to potential distribution shifts at test time using only unlabeled test data, without access to the original training data. While previous efforts mainly focus on a single modality, test-time distribution shift in the multi-modal setting is more complex and calls for new solutions. This paper tackles the problem of multi-modal test-time adaptation by proposing a novel method named Attention Bootstrapping with Principal Entropy Minimization (ABPEM). We observe that test-time distribution shift causes misalignment across modalities, leading to a large gap between intra-modality discrepancies (measured by self-attention) and inter-modality discrepancies (measured by cross-attention). We name this the attention gap. This attention gap widens with more severe distribution shifts, hindering effective modality fusion. To mitigate this attention gap and encourage better modality fusion, we propose attention bootstrapping that promotes cross-attention with the guidance of self-attention. Moreover, to reduce the gradient noise in the commonly-used entropy minimization, we adopt principal entropy minimization, a refinement of entropy minimization that reduces gradient noise by focusing on the principal parts of entropy, excluding less reliable gradient information. Extensive experiments on the benchmarks validate the effectiveness of the proposed ABPEM in comparison with competing baselines.
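The attention gap described in the abstract (intra-modality discrepancies measured by self-attention vs. inter-modality discrepancies measured by cross-attention) can be illustrated with a small NumPy sketch. This is not the paper's formulation; the gap proxy used here (difference of mean peak attention within vs. across modalities) and all function names are assumptions for illustration only.

```python
import numpy as np

def softmax(scores):
    # Row-wise softmax over attention scores.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_weights(queries, keys):
    # Scaled dot-product attention weights (value projection omitted).
    d = queries.shape[-1]
    return softmax(queries @ keys.T / np.sqrt(d))

def attention_gap(tokens_a, tokens_b):
    """Hypothetical proxy for the attention gap: how much more sharply
    tokens of modality A attend within their own modality (self-attention)
    than to modality B (cross-attention). Larger values suggest weaker
    cross-modal alignment."""
    self_attn = attention_weights(tokens_a, tokens_a)
    cross_attn = attention_weights(tokens_a, tokens_b)
    # Average peak attention mass per token, self minus cross.
    return float(self_attn.max(axis=-1).mean()
                 - cross_attn.max(axis=-1).mean())

rng = np.random.default_rng(0)
tokens_a = rng.normal(size=(8, 16))  # toy modality-A token features
tokens_b = rng.normal(size=(8, 16))  # toy modality-B token features
gap = attention_gap(tokens_a, tokens_b)
```

Under this toy proxy, attention bootstrapping would correspond to a loss term that pulls cross-attention distributions toward the (more reliable) self-attention distributions, shrinking `gap` during adaptation.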
Problem

Research questions and friction points this paper is trying to address.

Adapting a well-trained multi-modal model to distribution shifts at test time, using only unlabeled test data.
Test-time shift misaligns modalities, opening an attention gap between self-attention and cross-attention.
This attention gap widens with more severe shifts, hindering effective modality fusion.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Bootstrapping promotes cross-attention with the guidance of self-attention, narrowing the attention gap.
Principal Entropy Minimization focuses on the principal parts of entropy to reduce gradient noise during adaptation.
ABPEM needs no labeled target data and no architectural changes to the pretrained model.
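One plausible reading of Principal Entropy Minimization is that only the largest per-class entropy contributions enter the adaptation loss, so that small, low-confidence terms do not inject noisy gradients. The sketch below illustrates that reading; the top-`k` selection and all names here are assumptions for the example, not the paper's actual implementation.

```python
import numpy as np

def entropy_terms(probs):
    # Per-class entropy contributions -p * log(p), with 0 where p == 0.
    return np.where(probs > 0, -probs * np.log(probs), 0.0)

def principal_entropy(probs, k=2):
    """Sum only the k largest per-class entropy terms.
    Hypothetical 'principal' entropy: minor terms, which carry
    less reliable gradient information, are excluded."""
    terms = np.sort(entropy_terms(probs), axis=-1)[..., ::-1]
    return float(terms[..., :k].sum(axis=-1))

# Toy predictive distribution over four classes.
p = np.array([0.7, 0.2, 0.05, 0.05])
full_entropy = float(entropy_terms(p).sum())
principal = principal_entropy(p, k=2)
```

Minimizing `principal` instead of `full_entropy` still sharpens the prediction, but the gradient is driven only by the dominant entropy terms.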
Yusheng Zhao
State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University, Beijing, China
Junyu Luo
State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University, Beijing, China
Xiao Luo
Department of Computer Science, University of California, Los Angeles, CA, USA
Jinsheng Huang
Peking University
Multimodal Learning, Fintech
Jingyang Yuan
Peking University
LLM, AI for Science
Zhiping Xiao
Postdoc at University of Washington
CSE, DM, ML
Ming Zhang
State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University, Beijing, China