Unity by Diversity: Improved Representation Learning in Multimodal VAEs

📅 2024-03-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Multimodal variational autoencoders (VAEs) typically share encoder outputs, decoder inputs, or both across modalities. These hard architectural constraints make it difficult to learn a high-quality shared representation while preserving modality-specific information. Method: The paper replaces the hard constraints with a soft one: a new mixture-of-experts prior that softly guides each modality's latent representation toward a shared aggregate posterior. Each modality keeps its own encoder and latent distribution, so every encoding can retain more information from its uncompressed original features while still being aligned with the other modalities. Contribution/Results: Experiments on multiple benchmark datasets and two challenging real-world multimodal datasets show improved learned latent representations and better imputation of missing modalities compared to existing shared-architecture multimodal VAEs.

📝 Abstract
Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures either share the encoder output, decoder input, or both across modalities to learn a shared representation. Such architectures impose hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior, softly guiding each modality's latent representation towards a shared aggregate posterior. This approach results in a superior latent representation and allows each encoding to preserve information better from its uncompressed original features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.
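The soft constraint described in the abstract can be illustrated with a toy sketch: each modality gets its own Gaussian posterior, and instead of forcing shared encoder or decoder parameters, a regularizer penalizes divergence between the per-modality posteriors. The function names below and the use of an average pairwise KL as a stand-in for the KL to the mixture-of-experts aggregate (which has no closed form) are illustrative assumptions, not the paper's exact objective.

```python
import math

def kl_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians."""
    return 0.5 * (
        var_q / var_p
        + (mu_q - mu_p) ** 2 / var_p
        - 1.0
        + math.log(var_p / var_q)
    )

def soft_alignment_loss(posteriors):
    """Soft-alignment regularizer over per-modality posteriors.

    `posteriors` is a list of (mean, variance) pairs, one per modality.
    This sketch averages KL(q_m || q_m') over all ordered pairs as a
    tractable surrogate for aligning each posterior with the shared
    aggregate (an assumption made for illustration only).
    """
    m = len(posteriors)
    total = 0.0
    for mu_q, var_q in posteriors:
        for mu_p, var_p in posteriors:
            total += kl_gauss(mu_q, var_q, mu_p, var_p)
    return total / (m * m)
```

When all modalities agree, the loss is zero; as their posteriors drift apart, the penalty grows smoothly, which is what distinguishes this soft guidance from a hard parameter-sharing constraint.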
Problem

Research questions and friction points this paper is trying to address.

Variational Autoencoder
Multimodal Data Fusion
Missing Data Imputation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variational Autoencoder
Multi-modal Data
Feature Quality Improvement