🤖 AI Summary
This work addresses the degradation of generalization performance in medical multimodal representation learning caused by systematic biases. To mitigate this issue, the authors propose a model-agnostic dual-stream feature decorrelation framework that integrates causal inference with mutual information minimization. By leveraging a structural causal model, the method identifies and disentangles spurious correlations induced by latent confounders, thereby effectively extracting and fusing causally invariant features. Extensive experiments on multiple medical datasets—including MIMIC-IV, eICU, and ADNI—demonstrate significant performance improvements, enhanced robustness, and superior generalization capability. Notably, this study presents the first integration of causal inference and mutual information minimization for debiasing in medical multimodal learning.
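The core decorrelation idea can be illustrated with a simple proxy. The summary above only states that mutual information between the two feature streams is minimized; exact MI minimization requires a learned estimator, so the sketch below uses a cross-correlation penalty as a common, illustrative stand-in (the function name and use of NumPy are assumptions, not the paper's implementation):

```python
import numpy as np

def cross_correlation_penalty(z_causal, z_bias, eps=1e-8):
    """Illustrative decorrelation objective: penalize the squared entries
    of the cross-correlation matrix between the two feature streams.
    This is a simple stand-in for the mutual-information minimization
    described in the paper, not its actual objective.

    z_causal, z_bias: (batch, dim) feature matrices from the two streams.
    """
    # Standardize each feature dimension (zero mean, unit variance).
    zc = (z_causal - z_causal.mean(0)) / (z_causal.std(0) + eps)
    zb = (z_bias - z_bias.mean(0)) / (z_bias.std(0) + eps)
    # Empirical cross-correlation matrix, shape (dim_causal, dim_bias).
    corr = zc.T @ zb / zc.shape[0]
    # Penalty is zero only when the streams are fully decorrelated.
    return float((corr ** 2).mean())
```

Driving this penalty toward zero pushes the causal stream to carry no (linearly) predictable information about the bias stream, which is the intuition behind separating causally invariant features from spurious ones.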
📝 Abstract
Medical multimodal representation learning aims to integrate heterogeneous data into unified patient representations to support clinical outcome prediction. However, real-world medical datasets commonly contain systematic biases from multiple sources, which pose significant challenges for medical multimodal representation learning. Existing approaches typically focus on effective multimodal fusion while neglecting inherently biased features that degrade generalization. To address these challenges, we propose a Dual-Stream Feature Decorrelation Framework that identifies and mitigates biases introduced by latent confounders through structural causal analysis. Our method employs dual-stream neural networks to disentangle causal features from spurious correlations, using a generalized cross-entropy loss and mutual information minimization for effective decorrelation. The framework is model-agnostic and can be integrated into existing medical multimodal learning methods. Comprehensive experiments on the MIMIC-IV, eICU, and ADNI datasets demonstrate consistent performance improvements.
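The generalized cross-entropy (GCE) loss mentioned in the abstract has a standard closed form (Zhang and Sabuncu, 2018): for the probability p assigned to the true class, L_q(p) = (1 - p^q) / q. A minimal sketch, assuming the paper uses this standard form (its exact variant and the parameter q are not given here):

```python
import numpy as np

def gce_loss(true_class_probs, q=0.7):
    """Generalized cross-entropy loss L_q(p) = (1 - p^q) / q.

    As q -> 0 this recovers standard cross-entropy (-log p); at q = 1 it
    becomes the mean absolute error (1 - p), which is more robust to
    noisy or bias-aligned labels. The default q=0.7 is an assumption,
    not a value taken from the paper.
    """
    p = np.asarray(true_class_probs, dtype=float)
    return (1.0 - p ** q) / q
```

The interpolation between cross-entropy and MAE is what makes GCE useful for bias handling: it down-weights the gradient on samples the model already fits via easy (often spurious) shortcuts.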