Bridging Modalities via Progressive Re-alignment for Multimodal Test-Time Adaptation

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-modal test-time adaptation (TTA) faces the coupled challenge of unimodal feature shift and cross-modal semantic misalignment, hindering the extension of existing TTA methods to multimodal settings. To address this, we propose BriMPR, a progressive re-alignment framework that, for the first time, decouples multimodal TTA into two sequential stages: (1) prompt-tuning-based unimodal feature distribution calibration, and (2) cross-modal semantic alignment via masked modality-combined pseudo-labeling and instance-level contrastive learning. This design mitigates asymmetric distribution shifts across modalities and strengthens robust cross-modal interaction. Across multiple multimodal TTA benchmarks, BriMPR consistently outperforms state-of-the-art methods, demonstrating both the efficacy and generalizability of the proposed progressive decoupling strategy.

📝 Abstract
Test-time adaptation (TTA) enables online model adaptation using only unlabeled test data, aiming to bridge the gap between source and target distributions. However, in multimodal scenarios, varying degrees of distribution shift across modalities give rise to a complex coupling of unimodal shallow feature shift and cross-modal high-level semantic misalignment, posing a major obstacle to extending existing TTA methods to multimodal settings. To address this challenge, we propose a novel multimodal test-time adaptation (MMTTA) framework, termed Bridging Modalities via Progressive Re-alignment (BriMPR). BriMPR consists of two progressively enhanced modules and tackles the coupling effect with a divide-and-conquer strategy. Specifically, we first decompose MMTTA into multiple unimodal feature alignment sub-problems. Leveraging the strong function-approximation ability of prompt tuning, we calibrate the unimodal global feature distributions to their respective source distributions, achieving an initial semantic re-alignment across modalities. Subsequently, we assign credible pseudo-labels to combinations of masked and complete modalities, and introduce inter-modal instance-wise contrastive learning to further enhance the information interaction among modalities and refine the alignment. Extensive experiments on MMTTA tasks, including both corruption-based and real-world domain-shift benchmarks, demonstrate the superiority of our method. Our source code is available at [this URL](https://github.com/Luchicken/BriMPR).
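The first stage described in the abstract calibrates each modality's global feature distribution toward its source distribution. The paper does this by tuning prompts; as a minimal sketch of the kind of discrepancy such tuning could minimize, here is a mean/variance-matching surrogate loss in numpy (the loss form, function name, and `eps` value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def stat_alignment_loss(feats, src_mean, src_var, eps=1e-5):
    """Distribution-calibration surrogate: penalize the gap between the
    current test batch's feature statistics and stored source statistics.

    feats: (N, D) unimodal global features for the test batch.
    src_mean, src_var: (D,) statistics saved from the source model.
    """
    mu = feats.mean(axis=0)
    var = feats.var(axis=0)
    # squared distance on the means plus a log-ratio term on the variances
    mean_term = np.mean((mu - src_mean) ** 2)
    var_term = np.mean((np.log(var + eps) - np.log(src_var + eps)) ** 2)
    return mean_term + var_term
```

In a prompt-tuning setting, the learnable prompt tokens would be the parameters updated to drive this loss down for each modality independently, leaving the backbone frozen.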
Problem

Research questions and friction points this paper is trying to address.

Address multimodal distribution shifts in test-time adaptation scenarios
Resolve coupling of unimodal feature shifts and cross-modal misalignment
Enable effective adaptation using only unlabeled multimodal test data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive re-alignment for multimodal test-time adaptation
Prompt tuning for unimodal feature distribution calibration
Contrastive learning with pseudo-labels for cross-modal alignment
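The inter-modal instance-wise contrastive learning mentioned above can be illustrated with a symmetric InfoNCE objective over paired modality embeddings, where matched instances across modalities are positives and all other pairings are negatives. A minimal numpy sketch, assuming cosine similarity and a temperature of 0.1 (function names and hyperparameters here are illustrative, not the paper's):

```python
import numpy as np

def _xent_diag(logits):
    """Cross-entropy where the positive for row i sits at column i."""
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def info_nce(za, zb, temperature=0.1):
    """Symmetric InfoNCE between paired embeddings of two modalities.

    za, zb: (N, D) arrays; row i of za and row i of zb come from the
    same instance (a positive pair), every other pairing is a negative.
    """
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / temperature  # (N, N) cosine-similarity logits
    # average the a->b and b->a directions
    return 0.5 * (_xent_diag(sim) + _xent_diag(sim.T))
```

Minimizing this loss pulls each instance's embeddings together across modalities while pushing apart embeddings of different instances, which is one standard way to tighten cross-modal semantic alignment.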