Density-Informed VAE (DiVAE): Reliable Log-Prior Probability via Density Alignment Regularization

📅 2025-12-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Standard VAEs enforce latent variables to match simplistic priors (e.g., standard normal), neglecting the true data density structure. This causes misalignment between the log-prior and log-data-density, degrading distribution alignment, prior coverage, out-of-distribution (OOD) detection, and uncertainty calibration. To address this, we propose DiVAE: a variational autoencoder incorporating a lightweight, data-driven density alignment regularizer within the ELBO framework. This term explicitly aligns the log-prior in latent space with the log-density of input data, estimated via nonparametric or parametric density estimation. Concurrently, it encourages the learnable prior to concentrate around high-density data regions and adaptively allocates posterior mass according to local data density. Experiments on synthetic datasets and MNIST demonstrate that DiVAE significantly improves latent-space density consistency, prior coverage, and OOD detection performance—enhancing both model interpretability and reliability.

Technology Category

Application Category

📝 Abstract
We introduce Density-Informed VAE (DiVAE), a lightweight, data-driven regularizer that aligns the VAE log-prior probability $log p_Z(z)$ with a log-density estimated from data. Standard VAEs match latents to a simple prior, overlooking density structure in the data-space. DiVAE encourages the encoder to allocate posterior mass in proportion to data-space density and, when the prior is learnable, nudges the prior toward high-density regions. This is realized by adding a robust, precision-weighted penalty to the ELBO, incurring negligible computational overhead. On synthetic datasets, DiVAE (i) improves distributional alignment of latent log-densities to its ground truth counterpart, (ii) improves prior coverage, and (iii) yields better OOD uncertainty calibration. On MNIST, DiVAE improves alignment of the prior with external estimates of the density, providing better interpretability, and improves OOD detection for learnable priors.
Problem

Research questions and friction points this paper is trying to address.

Aligns VAE log-prior probability with data-estimated log-density
Improves latent distribution alignment and prior coverage
Enhances OOD uncertainty calibration and detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns VAE log-prior with data-estimated density
Adds lightweight penalty to ELBO for regularization
Improves latent alignment and out-of-distribution detection
🔎 Similar Papers
No similar papers found.
M
Michele Alessi
1Department of Mathematics, Informatics and Geosciences, University of Trieste, Italy
Alessio Ansuini
Alessio Ansuini
AREA Science Park, Research and Technology Institute
Computational NeuroscienceMachine LearningArtificial IntelligenceNeurobiologyPhysics
A
Alex Rodríguez
1Department of Mathematics, Informatics and Geosciences, University of Trieste, Italy