Multimodal Variational Autoencoder: a Barycentric View

📅 2024-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling both shared and modality-specific representations when some modalities are missing remains challenging. Method: We propose the Barycentric Variational Autoencoder (BC-VAE), which formalizes multimodal inference as a generalized barycenter optimization problem. Leveraging the Wasserstein distance to capture the geometric structure of the modality-wise distributions, BC-VAE jointly minimizes the KL divergence and the 2-Wasserstein distance, enabling concurrent learning of modality-invariant and modality-specific latent representations. Unlike conventional Product-of-Experts (PoE) or Mixture-of-Experts (MoE) approaches, BC-VAE inherently preserves cross-modal geometric consistency without explicit distribution alignment. Contribution/Results: Extensive experiments on three standard multimodal benchmarks demonstrate that BC-VAE significantly improves generative fidelity and representation disentanglement under missing-modality conditions.
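
The summary describes an objective that combines the KL divergence with the 2-Wasserstein distance between modality-wise distributions. As a reference point (not the authors' code), both quantities have simple closed forms when the unimodal inference distributions are diagonal Gaussians, the usual VAE encoder parameterization; the sketch below assumes that parameterization, and the function names are illustrative.

```python
import numpy as np

def kl_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """KL(N(mu1, diag(sigma1^2)) || N(mu2, diag(sigma2^2))) in closed form."""
    var1, var2 = sigma1 ** 2, sigma2 ** 2
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def w2sq_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
    ||mu1 - mu2||^2 plus the per-dimension (sigma1 - sigma2)^2 covariance term."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)
```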

📝 Abstract
Multiple signal modalities, such as vision and sound, are naturally present in real-world phenomena. Recently, there has been growing interest in learning generative models, in particular the variational autoencoder (VAE), for multimodal representation learning, especially in the case of missing modalities. The primary goal of these models is to learn modality-invariant and modality-specific representations that characterize information across multiple modalities. Previous attempts at multimodal VAEs approach this mainly through the lens of experts, aggregating unimodal inference distributions with a product of experts (PoE), a mixture of experts (MoE), or a combination of both. In this paper, we provide an alternative generic and theoretical formulation of the multimodal VAE through the lens of barycenters. We first show that PoE and MoE are specific instances of barycenters, derived by minimizing the asymmetric weighted KL divergence to the unimodal inference distributions. Our novel formulation extends these two barycenters to a more flexible choice by considering different types of divergences. In particular, we explore the Wasserstein barycenter defined by the 2-Wasserstein distance, which, compared to the KL divergence, better preserves the geometry of the unimodal distributions and captures both modality-specific and modality-invariant representations. Empirical studies on three multimodal benchmarks demonstrate the effectiveness of the proposed method.
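
To make the "lens of barycenters" concrete: for diagonal Gaussian unimodal posteriors q_m, the minimizer of the weighted sum of KL(q || q_m) terms is the product of experts (summed precisions, precision-weighted mean), while the 2-Wasserstein barycenter simply averages means and standard deviations dimension-wise. The MoE case, which minimizes the reverse direction KL(q_m || q), yields the mixture of the q_m and is typically handled by sampling rather than a single Gaussian. The sketch below illustrates the two closed forms only; the weights, array shapes, and toy numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def poe_barycenter(mus, sigmas, weights):
    """Minimizer of sum_m w_m KL(q || q_m): product of diagonal Gaussian experts
    (summed weighted precisions, precision-weighted mean)."""
    prec = weights[:, None] / sigmas ** 2          # weighted precisions, shape (M, D)
    var = 1.0 / prec.sum(axis=0)
    mu = var * (prec * mus).sum(axis=0)
    return mu, np.sqrt(var)

def w2_barycenter(mus, sigmas, weights):
    """2-Wasserstein barycenter of diagonal Gaussians: per dimension it is
    N(sum_m w_m mu_m, (sum_m w_m sigma_m)^2), i.e. averaged means and std devs."""
    mu = (weights[:, None] * mus).sum(axis=0)
    sigma = (weights[:, None] * sigmas).sum(axis=0)
    return mu, sigma

# Toy example: two unimodal posteriors over a 3-d latent space, equal weights.
mus = np.array([[0.0, 1.0, -1.0], [2.0, 1.0, 0.0]])
sigmas = np.array([[1.0, 0.5, 2.0], [0.5, 0.5, 1.0]])
w = np.array([0.5, 0.5])
print("PoE (KL barycenter):", poe_barycenter(mus, sigmas, w))
print("W2 barycenter:      ", w2_barycenter(mus, sigmas, w))
```

Note how the two aggregations differ: the PoE barycenter sharpens (its variance is smaller than any expert's), whereas the Wasserstein barycenter interpolates the experts' spreads, which is the geometric behavior the abstract attributes to the 2-Wasserstein distance.
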
Problem

Research questions and friction points this paper is trying to address.

Multimodal Learning
Cross-modal Correlation
Missing Modality Imputation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wasserstein Barycenter
Multimodal Variational Autoencoder
Inter-modal Correlation Learning