ShaLa: Multimodal Shared Latent Space Modelling

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal variational autoencoders (VAEs) struggle to model expressive joint variational posteriors, resulting in impoverished shared latent representations, suboptimal generation quality, and poor modality scalability. To address these limitations, this paper proposes a unified generative framework that synergistically integrates variational autoencoding with diffusion modeling. First, we introduce a novel cross-modal inference architecture to enhance the expressiveness of the joint posterior. Second, we design a two-stage diffusion prior that explicitly captures high-order semantic structures within the shared latent space. Our approach significantly improves both multimodal synthesis fidelity and cross-modal consistency. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, including MIMIC-CXR, NTU-120, and UCF101. Moreover, the framework exhibits strong scalability to arbitrary modality combinations, enabling flexible and robust multimodal generation without architectural re-design.
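
The paper's exact cross-modal inference architecture is not reproduced here, but the problem it targets, fusing per-modality evidence into a single joint posterior over a shared latent, is often illustrated with a product-of-experts baseline. The sketch below is a minimal, hypothetical PyTorch example of that baseline; all module names, dimensions, and the fusion rule are assumptions for illustration, not ShaLa's actual design.

```python
# Minimal sketch: joint-posterior inference over a shared latent via
# product-of-experts fusion of per-modality Gaussian posteriors.
# Hypothetical names/dims; not ShaLa's actual architecture.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one modality to the mean/log-variance of a Gaussian posterior."""
    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussians into one joint Gaussian (precision-weighted)."""
    precisions = [torch.exp(-lv) for lv in logvars]       # 1 / sigma^2 per expert
    joint_prec = sum(precisions) + 1.0                    # + standard-normal prior expert
    joint_mu = sum(m * p for m, p in zip(mus, precisions)) / joint_prec
    joint_logvar = -torch.log(joint_prec)
    return joint_mu, joint_logvar

# Usage: encode whichever modalities are present, then fuse.
encoders = {"image": ModalityEncoder(1024, 64), "text": ModalityEncoder(512, 64)}
inputs = {"image": torch.randn(8, 1024), "text": torch.randn(8, 512)}
stats = [encoders[m](x) for m, x in inputs.items()]
mu, logvar = product_of_experts([s[0] for s in stats], [s[1] for s in stats])
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
```

Because the fusion accepts any subset of experts, this style of inference also hints at why such models can scale to arbitrary modality combinations without architectural re-design.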

📝 Abstract
This paper presents a novel generative framework for learning shared latent representations across multimodal data. Many advanced multimodal methods focus on capturing all combinations of modality-specific details across inputs, which can inadvertently obscure the high-level semantic concepts that are shared across modalities. Notably, multimodal VAEs with low-dimensional latent variables are designed to capture shared representations, enabling tasks such as joint multimodal synthesis and cross-modal inference. However, multimodal VAEs often struggle to model expressive joint variational posteriors and suffer from low-quality synthesis. In this work, ShaLa addresses these challenges by integrating a novel architectural inference model and a second-stage expressive diffusion prior, which not only facilitates effective inference of shared latent representations but also significantly improves the quality of downstream multimodal synthesis. We validate ShaLa extensively across multiple benchmarks, demonstrating superior coherence and synthesis quality compared to state-of-the-art multimodal VAEs. Furthermore, ShaLa scales to many more modalities, whereas prior multimodal VAEs have fallen short in capturing the increasing complexity of the shared latent space.
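
The abstract's second-stage diffusion prior can be pictured as a standard denoising-diffusion model trained on latents produced by the frozen first-stage encoder. The following is a hedged sketch of that idea using the usual DDPM epsilon-prediction objective; the noise schedule, network, and variable names are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: a second-stage diffusion prior over frozen VAE latents,
# trained with the standard DDPM epsilon-prediction objective.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)            # cumulative \bar{alpha}_t

class LatentDenoiser(nn.Module):
    """Small MLP that predicts the noise added to a latent at step t."""
    def __init__(self, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z_t, t):
        t_emb = t.float().unsqueeze(-1) / T                # crude scalar timestep embedding
        return self.net(torch.cat([z_t, t_emb], dim=-1))

def diffusion_prior_loss(denoiser, z0):
    """One training step: noise clean latents z0, regress the added noise."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alphas_bar[t].unsqueeze(-1)
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps         # forward process q(z_t | z_0)
    return ((denoiser(z_t, t) - eps) ** 2).mean()
```

Training the prior on fixed latents decouples the two stages: the VAE learns the shared representation, and the diffusion model learns its (potentially complex, non-Gaussian) distribution.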
Problem

Research questions and friction points this paper is trying to address.

Learning shared latent representations across multimodal data
Improving quality of multimodal synthesis and inference
Scaling to many modalities with complex shared spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates a novel cross-modal inference architecture for a more expressive joint posterior
Adds a second-stage expressive diffusion prior over the shared latent space (a sampling sketch follows this list)
Improves inference of shared latent representations and downstream multimodal synthesis quality
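
To show how the two stages could compose at generation time, here is a hypothetical end-to-end sampling sketch: ancestral DDPM sampling draws a shared latent from the diffusion prior, and per-modality decoders then render it. It reuses `T`, `betas`, `alphas_bar`, and the denoiser from the sketch above; `decoders` is an assumed dict of trained modality decoders, not an interface the paper defines.

```python
# Hypothetical composition of the two stages at sampling time.
# T, betas, alphas_bar, and LatentDenoiser follow the previous sketch.
import torch

@torch.no_grad()
def sample_shared_latent(denoiser, latent_dim: int, n: int = 1):
    """Ancestral DDPM sampling of a shared latent from the diffusion prior."""
    z = torch.randn(n, latent_dim)
    for t in reversed(range(T)):
        t_batch = torch.full((n,), t, dtype=torch.long)
        eps_hat = denoiser(z, t_batch)
        alpha = 1.0 - betas[t]
        ab = alphas_bar[t]
        z = (z - betas[t] / (1.0 - ab).sqrt() * eps_hat) / alpha.sqrt()
        if t > 0:                                          # no noise at the final step
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return z

# Joint synthesis: one shared latent, decoded by every modality decoder.
# z = sample_shared_latent(denoiser, latent_dim=64, n=4)
# outputs = {name: dec(z) for name, dec in decoders.items()}
```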