🤖 AI Summary
Addressing the challenges of modeling high-order interactions in multimodal data and the limited alignment and generation performance under missing modalities, this paper proposes a cross-modal alignment framework based on Variational Vine Copulas. The method models unimodal representations as Gaussian mixture distributions and employs vine copulas to explicitly decouple marginal distributions from pairwise and higher-order dependencies, enabling scalable joint-distribution learning via variational inference. Its core innovation is the first integration of vine copulas into multimodal representation learning, supporting nonlinear dependency modeling, accurate inference, and generation under arbitrary patterns of missing modalities. Evaluated on the MIMIC-III dataset, the approach significantly outperforms state-of-the-art baselines on cross-modal alignment, imputation, and downstream prediction tasks. The implementation is publicly available.
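The decoupling the summary describes, marginals handled separately from a cascade of pair copulas, can be illustrated with a three-variable D-vine. This is a generic sketch, not the paper's implementation: the Gaussian pair-copula families, the parameters `r12`, `r23`, `r13_2`, and the standard-normal marginals are all assumptions, chosen so the factorisation can be cross-checked against a closed-form trivariate Gaussian density.

```python
import numpy as np
from scipy import stats

def gauss_copula_pdf(u, v, r):
    # Density of the bivariate Gaussian copula with correlation r:
    # c(u, v) = phi2(z, w; r) / (phi(z) * phi(w)), where z = Phi^{-1}(u).
    z, w = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.exp(-(r * r * (z * z + w * w) - 2 * r * z * w)
                  / (2 * (1 - r * r))) / np.sqrt(1 - r * r)

def h(u, v, r):
    # h-function: conditional CDF F(u | v) under a Gaussian pair copula.
    return stats.norm.cdf((stats.norm.ppf(u) - r * stats.norm.ppf(v))
                          / np.sqrt(1 - r * r))

# Pair-copula parameters of a 3-variable D-vine with order 1-2-3
# (tree 1: c_12 and c_23; tree 2: c_13|2). Values are arbitrary.
r12, r23, r13_2 = 0.6, 0.5, 0.3

def vine_density(x):
    # Joint density with N(0, 1) marginals via the vine factorisation:
    # f = f1 * f2 * f3 * c12 * c23 * c13|2(h(u1|u2), h(u3|u2)).
    u = stats.norm.cdf(x)
    marginals = np.prod(stats.norm.pdf(x))
    tree1 = gauss_copula_pdf(u[0], u[1], r12) * gauss_copula_pdf(u[1], u[2], r23)
    tree2 = gauss_copula_pdf(h(u[0], u[1], r12), h(u[2], u[1], r23), r13_2)
    return marginals * tree1 * tree2

# With Gaussian pair copulas the vine is exactly a trivariate Gaussian whose
# (1,3) correlation follows from the partial-correlation identity:
r13 = r13_2 * np.sqrt((1 - r12**2) * (1 - r23**2)) + r12 * r23
cov = np.array([[1.0, r12, r13], [r12, 1.0, r23], [r13, r23, 1.0]])
x = np.array([0.3, -1.1, 0.7])
```

The point of the cross-check is that the vine pays no price for its factorised form here: `vine_density(x)` matches the joint Gaussian density built from `cov`, while non-Gaussian pair copulas could be dropped into the same skeleton to capture nonlinear dependence.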
📝 Abstract
Various data modalities are common in real-world applications (e.g., electronic health records, medical images, and clinical notes in healthcare). It is essential to develop multimodal learning methods that aggregate information from multiple modalities. The main challenge is how to appropriately align and fuse the representations of different modalities into a joint distribution. Existing methods mainly rely on concatenation or the Kronecker product, which oversimplify the interaction structure between modalities, indicating a need to model more complex interactions. Additionally, the joint distribution of latent representations with higher-order interactions is underexplored. The copula is a powerful statistical structure for modelling the interactions among variables, as it naturally bridges the joint distribution and the marginal distributions of multiple variables. We propose a novel copula-driven multimodal learning framework, which focuses on learning the joint distribution of various modalities to capture the complex interactions among them. The key idea is to interpret the copula model as a tool to align the marginal distributions of the modalities efficiently. By assuming a Gaussian mixture distribution for each modality and a copula model on the joint distribution, our model can generate accurate representations for missing modalities. Extensive experiments on public MIMIC datasets demonstrate the superior performance of our model over other competitors. The code is available at https://github.com/HKU-MedAI/CMCM.
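As an illustration of the core idea, Gaussian-mixture marginals coupled through a copula so that a missing modality can be inferred from an observed one, here is a minimal bivariate sketch. It uses a single Gaussian copula rather than the paper's vine construction, and every distribution, parameter, and helper name (`GMM1D`, `rho`, `impute_b`) is an assumption for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

class GMM1D:
    """1-D Gaussian mixture standing in for one modality's latent marginal."""
    def __init__(self, weights, means, stds):
        self.w, self.mu, self.s = (np.asarray(a, float) for a in (weights, means, stds))

    def cdf(self, x):
        x = np.asarray(x, float)
        return np.sum(self.w * stats.norm.cdf(x[..., None], self.mu, self.s), axis=-1)

    def ppf(self, u):
        # Numerical inverse CDF by bisection (the mixture has no closed form).
        u = np.asarray(u, float)
        lo, hi = np.full(u.shape, -20.0), np.full(u.shape, 20.0)
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            below = self.cdf(mid) < u
            lo = np.where(below, mid, lo)
            hi = np.where(below, hi, mid)
        return 0.5 * (lo + hi)

m1 = GMM1D([0.4, 0.6], [-2.0, 1.5], [0.5, 0.8])  # marginal of modality A
m2 = GMM1D([0.7, 0.3], [0.0, 3.0], [1.0, 0.6])   # marginal of modality B
rho = 0.8  # copula correlation encoding the cross-modal dependence

# Sample the joint: latent Gaussian -> copula-scale uniforms -> GMM marginals.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u = stats.norm.cdf(z)
x1, x2 = m1.ppf(u[:, 0]), m2.ppf(u[:, 1])

def impute_b(x1_obs):
    # Infer the missing modality B from observed modality A: map A to the
    # copula scale, take the conditional mean there, and map back through B.
    z1 = stats.norm.ppf(m1.cdf(np.asarray([x1_obs])))
    return m2.ppf(stats.norm.cdf(rho * z1))[0]
```

The sampled `x1, x2` keep each mixture marginal exactly while the copula carries all the dependence; `impute_b` rises monotonically with its input because the Gaussian copula's conditional mean is linear on the latent scale. This separation of "what each modality looks like" from "how modalities co-vary" is the alignment role the abstract assigns to the copula.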