PH-VAE: A Polynomial Hierarchical Variational Autoencoder Towards Disentangled Representation Learning

📅 2025-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address key limitations of variational autoencoders (VAEs)—including poor interpretability of latent variables, blurry reconstructions, the “prior collapse” (or “origin attraction”) effect induced by KL divergence, and overfitting in low-data regimes—this paper proposes the Polynomial Hierarchical VAE (PH-VAE). Methodologically, PH-VAE introduces: (1) a novel polynomial divergence to replace KL divergence, thereby alleviating excessive prior constraint; (2) a hierarchical latent structure governed by polynomial transformations, enhancing representation disentanglement and robustness; and (3) explicit latent-space disentanglement regularization to improve fine-grained feature modeling. Experiments demonstrate that PH-VAE significantly outperforms baseline VAEs in reconstruction sharpness, distribution fidelity, and quantitatively verifiable disentanglement—particularly under small-sample settings, where it effectively mitigates origin attraction. Overall, PH-VAE establishes a new paradigm for interpretable generative modeling.
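The summary above centers on replacing the KL term with a polynomial divergence. As a rough, hypothetical sketch of that contrast (the paper's exact divergence formula is not reproduced here, and both `polynomial_divergence` and its `weights` parameter are illustrative assumptions, not the authors' definition):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior,
    # the regularizer in a vanilla VAE; this term is what pulls posteriors
    # toward the prior's origin.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def polynomial_divergence(mu, logvar, weights=(1.0, 0.5)):
    # Hypothetical stand-in for the paper's Polynomial Divergence: a
    # polynomial penalty on the posterior's deviation from N(0, I). It is
    # zero exactly when q(z|x) matches the prior and, unlike KL, stays
    # bounded as the posterior variance collapses toward zero.
    var = np.exp(logvar)
    dev = np.sum(mu**2 + (var - 1.0) ** 2, axis=-1)  # zero iff q = N(0, I)
    return sum(w * dev ** (k + 1) for k, w in enumerate(weights))
```

Either function would slot into the usual ELBO as the regularization term next to the reconstruction loss; the point of the sketch is only that a polynomial penalty changes how strongly the posterior is drawn toward the prior.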

📝 Abstract
The variational autoencoder (VAE) is a simple and efficient generative artificial intelligence method for modeling complex probability distributions of various types of data, such as images and texts. However, it suffers from several shortcomings: lack of interpretability in the latent variables; difficulty in tuning hyperparameters during training; blurry, unrealistic downstream outputs or loss of information caused by how it computes its loss function and recovers data distributions; overfitting; and an origin gravity (attraction) effect on small datasets, among other issues. These limitations lead to unsatisfactory generation for data with complex distributions. In this work, we propose and develop a polynomial hierarchical variational autoencoder (PH-VAE), which uses a polynomial hierarchical data format to generate or reconstruct data distributions. We also propose a novel Polynomial Divergence in the loss function to replace or generalize the Kullback-Leibler (KL) divergence, which yields systematic and drastic improvements in both the accuracy and reproducibility of the reconstructed distribution function and in the quality of reconstructed data images, while keeping the dataset size unchanged yet capturing fine resolution in the data. Moreover, we show that the proposed PH-VAE exhibits a form of disentangled representation learning.
Problem

Research questions and friction points this paper is trying to address.

Improve interpretability in latent variables
Enhance accuracy in reconstructed data
Enable disentangled representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polynomial hierarchical data format
Novel Polynomial Divergence
Disentangled representation learning
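The "polynomial hierarchical data format" listed above can be pictured, under one plausible reading (an assumption for illustration, not the authors' specification), as feeding successive powers of the input to parallel encoder branches whose outputs stack into a hierarchy of latent blocks:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy linear map standing in for a learned encoder network.
    return x @ W

# Hypothetical sketch: level k of the hierarchy encodes x**k, so the
# concatenated latent code stacks blocks from a coarse (linear) view up
# to finer (higher-order) views of the same input. Shapes and the
# power-of-x construction are illustrative assumptions, not the paper's
# exact design.
x = rng.normal(size=(8, 16))                         # batch of flattened inputs
weights = [0.1 * rng.normal(size=(16, 4)) for _ in range(3)]
latents = [encode(x ** (k + 1), W) for k, W in enumerate(weights)]
z = np.concatenate(latents, axis=-1)                 # hierarchical latent code
```

Keeping the levels as separate blocks is what would let a disentanglement regularizer act on each order of structure independently.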
Xi Chen
Department of Civil and Environmental Engineering, University of California, Berkeley, CA, 94720, USA; Key Laboratory of Soft Machines and Smart Devices of Zhejiang Province, Department of Engineering Mechanics, Zhejiang University, Hangzhou, 310027, China
Shaofan Li
Professor of Applied and Computational Mechanics, University of California-Berkeley
Soft matter mechanics, Atomistic and Multiscale Simulation, Computational Nano-mechanics, Computational Failure Mechanics, Micro