PROMISE: Prompt-Attentive Hierarchical Contrastive Learning for Robust Cross-Modal Representation with Missing Modalities

📅 2025-11-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Multimodal models suffer significant performance degradation when modalities are missing, primarily due to inconsistent representation learning between complete and incomplete inputs. To address this, we propose PROMISE, a novel framework featuring three key innovations: (1) a modality-aware prompt attention mechanism that dynamically generates robust representations conditioned on the missing modalities; (2) hierarchical contrastive learning that enforces consistent cross-modal and cross-completeness representation alignment; and (3) a scalable multimodal prompt learning architecture supporting arbitrary modality subsets as input. Evaluated on multiple benchmark datasets, PROMISE consistently outperforms state-of-the-art methods. Ablation studies confirm the effectiveness of each component, demonstrating strong generalization and robustness across diverse missing-modality scenarios.

πŸ“ Abstract
Multimodal models integrating natural language and visual information have substantially improved the generalization of representation models. However, their effectiveness declines significantly in real-world situations where certain modalities are missing or unavailable. This degradation primarily stems from inconsistent representation learning between complete multimodal data and incomplete-modality scenarios. Existing approaches typically address missing modalities through relatively simplistic generation methods, which fail to adequately preserve cross-modal consistency and thus yield suboptimal performance. To overcome this limitation, we propose a novel multimodal framework named PROMISE, a PROMpting-Attentive HIerarchical ContraStive LEarning approach designed explicitly for robust cross-modal representation under conditions of missing modalities. Specifically, PROMISE incorporates multimodal prompt learning into a hierarchical contrastive learning framework equipped with a specially designed prompt-attention mechanism. This mechanism dynamically generates robust and consistent representations for scenarios in which particular modalities are absent, effectively bridging the representational gap between complete and incomplete data. Extensive experiments on benchmark datasets, together with comprehensive ablation studies, demonstrate the superior performance of PROMISE compared with current state-of-the-art multimodal methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses performance degradation with missing modalities in multimodal models
Enhances cross-modal consistency through hierarchical contrastive learning
Dynamically generates robust representations for incomplete modality scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical contrastive learning for robust cross-modal representation
Multimodal prompt learning with attention mechanism
Dynamic generation of consistent representations for missing modalities
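The hierarchical contrastive idea above can be sketched as a toy two-level objective. This is a minimal illustration, not the paper's implementation: the exact loss composition, the prompt-attention module, and the weighting `alpha` are assumptions, and `info_nce` here is a plain InfoNCE over cosine similarities standing in for the paper's alignment losses.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor's positive is the same-index row;
    all other rows in the batch act as negatives."""
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / len(anchors)

def hierarchical_loss(img, txt, complete, incomplete, alpha=0.5):
    """Assumed two-level composition (the summary does not give the exact form):
    level 1 aligns image and text views of the same sample (cross-modal);
    level 2 aligns complete-input representations with prompt-completed
    incomplete-input representations (cross-completeness)."""
    return alpha * info_nce(img, txt) + (1 - alpha) * info_nce(complete, incomplete)
```

In this reading, a well-trained prompt-attention module would push the `incomplete` representations toward their `complete` counterparts, so both contrastive terms shrink together; the perfectly-aligned case yields a much smaller loss than a misaligned one.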
Jiajun Chen
Beijing University of Posts and Telecommunications
Sai Cheng
Beijing University of Posts and Telecommunications
Yutao Yuan
Beijing University of Posts and Telecommunications
Yirui Zhang
Beijing University of Posts and Telecommunications
Haitao Yuan
New Jersey Institute of Technology, NJ, USA, and Beihang University, Beijing, China
Research interests: Deep Learning, Data-driven Optimization, Computational Intelligence, Intelligent Decisions, IoTs
Peng Peng
Tsinghua University
Yi Zhong
Beijing Institute of Technology