Enhancing Foundation VLM Robustness to Missing Modality: Scalable Diffusion for Bi-directional Feature Restoration

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of vision-language models (VLMs) under missing input modalities, a challenge inadequately tackled by existing approaches that struggle to balance semantic recovery and generalization. The authors propose a plug-and-play intermediate diffusion training module featuring a dynamic modality gating mechanism that adaptively guides the generation of semantically consistent features. Additionally, cross-modal mutual learning is introduced to achieve bidirectional alignment between the semantic spaces of dual encoders. This framework effectively enables bidirectional reconstruction of visual and textual features, demonstrating superior zero-shot performance across multiple benchmark datasets. It exhibits strong robustness and scalability under varying modality missing rates and environmental conditions, consistently outperforming current state-of-the-art methods.

📝 Abstract
Vision-Language Models (VLMs) typically assume complete modality input during inference; however, their effectiveness drops sharply when certain modalities are unavailable or incomplete. Current research faces two dilemmas: prompt-based methods struggle to restore missing yet indispensable features and impair the generalization of VLMs, while imputation-based approaches, lacking effective guidance, are prone to generating semantically irrelevant noise. Restoring precise semantics while sustaining VLM generalization thus remains challenging. In this paper, we propose a general missing-modality restoration strategy. We introduce an enhanced diffusion model as a pluggable mid-stage training module to effectively restore missing features. Our strategy introduces two key innovations: (I) Dynamic Modality Gating, which adaptively leverages conditional features to steer the generation of semantically consistent features; and (II) a Cross-Modal Mutual Learning mechanism, which bridges the semantic spaces of the dual encoders to achieve bidirectional alignment. Zero-shot evaluations across benchmark datasets demonstrate that our approach outperforms existing baselines. Extensive experiments and ablation studies confirm that our model is a robust and scalable extension for VLMs in missing-modality scenarios, ensuring reliability across diverse missing rates and environments. Our code and models will be made publicly available.
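To make the idea of conditioning the restoration of a missing feature on the available modality concrete, here is a minimal toy sketch. It is not the paper's method: the real Dynamic Modality Gating is a learned module inside a diffusion model, whereas this sketch uses a hand-written cosine-similarity gate and a simple contraction step. All names (`gate`, `denoise_step`, `restore`) and the update rule are illustrative assumptions.

```python
# Toy sketch (NOT the paper's implementation): iteratively refine a noisy
# estimate of a missing modality's feature, guided by the available
# modality's feature, with a gate controlling how strongly the condition
# steers each step.
import numpy as np

rng = np.random.default_rng(0)

def gate(condition, estimate):
    """Scalar gate in [0, 1] from the cosine similarity between the
    conditioning feature and the current estimate. Stands in for the
    learned Dynamic Modality Gating module."""
    num = float(condition @ estimate)
    den = np.linalg.norm(condition) * np.linalg.norm(estimate) + 1e-8
    return 0.5 * (1.0 + num / den)  # map cosine [-1, 1] -> [0, 1]

def denoise_step(x, condition, alpha):
    """One illustrative reverse step: pull the estimate toward the
    conditional feature, scaled by the gate and the step size alpha."""
    g = gate(condition, x)
    return x + alpha * g * (condition - x)

def restore(condition, steps=50, alpha=0.2):
    """Start from pure noise and iteratively refine toward a feature
    consistent with the available (conditioning) modality."""
    x = rng.standard_normal(condition.shape)
    for _ in range(steps):
        x = denoise_step(x, condition, alpha)
    return x

text_feat = rng.standard_normal(16)        # available modality's feature
restored_visual = restore(text_feat)       # stand-in for the missing one
err = np.linalg.norm(restored_visual - text_feat)
```

In this toy version the restored feature simply converges to the condition; in the paper, a trained diffusion network generates a distinct but semantically consistent feature for the missing modality, and the gating decides how much of the conditioning signal to inject at each step.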
Problem

Research questions and friction points this paper is trying to address.

missing modality
vision language models
feature restoration
robustness
semantic consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Model
Missing Modality Restoration
Dynamic Modality Gating
Cross-Modal Mutual Learning
Vision-Language Models
👥 Authors
Wei Dai (Xi'an Jiaotong University)
Haoyu Wang (Xi'an Jiaotong University)
Honghao Chang (School of Information and Communications Engineering, Xi'an Jiaotong University)
Lijun He (General Electric Global Research Center)
Fan Li (School of Information and Communications Engineering, Xi'an Jiaotong University)
Jian Sun (Professor at Xi'an Jiaotong University; Applied Mathematics, Computer Vision, Machine Learning, Medical Image Analysis)
Haixia Bi (School of Information and Communications Engineering, Xi'an Jiaotong University)