AI Summary
Multimodal models suffer significant performance degradation when modalities are missing, primarily due to inconsistent representation learning between complete and incomplete inputs. To address this, we propose PROMISE, a novel framework featuring three key innovations: (1) a modality-aware prompt-attention mechanism that dynamically generates robust representations conditioned on which modalities are missing; (2) hierarchical contrastive learning that enforces consistent representation alignment across modalities and across completeness conditions; and (3) a scalable multimodal prompt-learning architecture that accepts arbitrary subsets of modalities as input. Evaluated on multiple benchmark datasets, PROMISE consistently outperforms state-of-the-art methods, and ablation studies confirm the contribution of each component, demonstrating strong generalization and robustness across diverse missing-modality scenarios.
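As a rough illustration of the first innovation, the sketch below shows one plausible form of modality-aware prompt attention in NumPy: each missing modality's feature is replaced by a learnable prompt vector before a standard self-attention step fuses the modality tokens. All function names, shapes, and the attention layout here are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prompt_attention(features, prompts, present):
    """Fuse modality features, substituting a learnable prompt for each missing one.

    features: (M, d) modality embeddings (rows for missing modalities are ignored)
    prompts:  (M, d) learnable prompt vectors, one per modality (hypothetical)
    present:  (M,) boolean mask, True where the modality is available
    """
    # Replace each missing modality's feature with its prompt vector
    tokens = np.where(present[:, None], features, prompts)
    # Scaled dot-product self-attention over the modality tokens
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    return attn @ tokens  # (M, d) missing-aware fused representations
```

In this sketch the prompt vectors would be trained end-to-end so that, when a modality is absent, its prompt steers attention toward a representation consistent with the complete-input case.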
Abstract
Multimodal models that integrate natural language and visual information have substantially improved the generalization of representation models. However, their effectiveness declines significantly in real-world situations where certain modalities are missing or unavailable. This degradation stems primarily from inconsistent representation learning between complete multimodal data and incomplete-modality scenarios. Existing approaches typically handle missing modalities with relatively simplistic generation methods, which fail to adequately preserve cross-modal consistency and thus yield suboptimal performance. To overcome this limitation, we propose PROMISE, a PROMpting-Attentive HIerarchical ContraStive LEarning framework designed explicitly for robust cross-modal representation under missing-modality conditions. Specifically, PROMISE incorporates multimodal prompt learning into a hierarchical contrastive learning framework equipped with a specially designed prompt-attention mechanism. This mechanism dynamically generates robust, consistent representations when particular modalities are absent, effectively bridging the representational gap between complete and incomplete data. Extensive experiments on benchmark datasets, along with comprehensive ablation studies, demonstrate the superior performance of PROMISE over current state-of-the-art multimodal methods.
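To make the hierarchical contrastive objective concrete, here is a minimal NumPy sketch assuming two alignment levels: a cross-modal InfoNCE term between image and text views, and a cross-completeness term between complete-input representations and representations computed with the text modality missing. The two-level structure, the `alpha` weighting, and all names are illustrative assumptions; the paper's actual hierarchy may differ.

```python
import numpy as np

def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE loss: row i of z_a should match row i of z_b; other rows are negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                                  # (n, n) scaled cosine sims
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def hierarchical_contrastive_loss(img_full, txt_full, img_only, alpha=0.5):
    # Level 1 (cross-modal): align image and text views of the same sample
    l_modal = info_nce(img_full, txt_full)
    # Level 2 (cross-completeness): align complete-input representations with
    # representations produced when the text modality is missing (hypothetical split)
    l_complete = info_nce(img_full, img_only)
    return l_modal + alpha * l_complete
```

Minimizing the second term is what pulls incomplete-input representations toward their complete-input counterparts, which is the consistency the abstract argues simplistic generation methods fail to preserve.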