Parsimonious Gaussian mixture models with piecewise-constant eigenvalue profiles

📅 2025-07-02
📈 Citations: 0 · Influential: 0
🤖 AI Summary
To address the trade-off between the excessive parameter count of full-covariance Gaussian mixture models (GMMs) and the limited modeling capacity of spherical GMMs in high-dimensional settings, this paper proposes a parsimonious GMM family with piecewise-constant eigenvalue profiles: eigenvalue multiplicities can be specified arbitrarily, jointly achieving anisotropic covariance modeling and parameter efficiency. Methodologically, the authors extend the mixture of probabilistic principal component analyzers (MPPCA) framework and design a componentwise penalized EM algorithm with guaranteed monotonicity, unifying the estimation of the model parameters and the eigenvalue-multiplicity hyperparameters. Experiments show that the proposed models significantly improve the likelihood-parsimony trade-off across density estimation, clustering, and single-image denoising, outperforming classical GMMs and their low-rank variants.
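To make the parameterization concrete, here is a minimal NumPy sketch (not from the paper; the function name, seed, and example multiplicities are illustrative) that builds one component's covariance with a piecewise-constant eigenvalue profile and counts its free covariance parameters against the full and spherical extremes:

```python
import numpy as np

def pc_eig_covariance(d, multiplicities, values, seed=None):
    """Covariance matrix whose eigenvalue profile is piecewise
    constant: values[j] is repeated multiplicities[j] times.
    A random orthonormal basis stands in for learned eigenvectors."""
    assert sum(multiplicities) == d
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    lam = np.repeat(values, multiplicities)
    return (Q * lam) @ Q.T  # V diag(lam) V^T

d, q = 50, [3, 7, 40]
Sigma = pc_eig_covariance(d, q, values=[10.0, 2.0, 0.5], seed=0)

# Free covariance parameters per component:
#   full GMM:       d(d+1)/2
#   spherical GMM:  1
#   this model:     J eigenvalues + dimension of the flag manifold
#                   of eigenspaces, d(d-1)/2 - sum_j q_j(q_j-1)/2
full = d * (d + 1) // 2
flag = d * (d - 1) // 2 - sum(qj * (qj - 1) // 2 for qj in q)
print(full, 1, len(q) + flag)  # 1275, 1, 424
```

For d = 50 with blocks (3, 7, 40), the component needs 424 covariance parameters instead of 1275, while still capturing two anisotropic leading eigenspaces.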

📝 Abstract
Gaussian mixture models (GMMs) are ubiquitous in statistical learning, particularly for unsupervised problems. While full GMMs suffer from the overparameterization of their covariance matrices in high-dimensional spaces, spherical GMMs (with isotropic covariance matrices) certainly lack flexibility to fit certain anisotropic distributions. Connecting these two extremes, we introduce a new family of parsimonious GMMs with piecewise-constant covariance eigenvalue profiles. These extend several low-rank models like the celebrated mixtures of probabilistic principal component analyzers (MPPCA), by enabling any possible sequence of eigenvalue multiplicities. If the latter are prespecified, then we can naturally derive an expectation-maximization (EM) algorithm to learn the mixture parameters. Otherwise, to address the notoriously challenging issue of jointly learning the mixture parameters and hyperparameters, we propose a componentwise penalized EM algorithm, whose monotonicity is proven. We show the superior likelihood-parsimony tradeoffs achieved by our models on a variety of unsupervised experiments: density fitting, clustering and single-image denoising.
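When the multiplicities are prespecified, the covariance M-step plausibly has a closed form generalizing MPPCA's trailing-eigenvalue average: eigendecompose the responsibility-weighted scatter matrix and replace each eigenvalue block by its within-block mean. The sketch below assumes that block-averaging form; the helper name and signature are illustrative, not the paper's code.

```python
import numpy as np

def covariance_m_step(X, resp_k, multiplicities, eps=1e-12):
    """Covariance update for one mixture component under a fixed
    eigenvalue-multiplicity profile (assumed block-averaging form,
    generalizing MPPCA's shared noise-variance average).

    X      : (n, d) data matrix
    resp_k : (n,) E-step responsibilities for component k
    """
    w = resp_k / (resp_k.sum() + eps)           # normalized weights
    mu = w @ X                                  # weighted mean
    Xc = X - mu
    S = (Xc * w[:, None]).T @ Xc                # weighted scatter
    evals, evecs = np.linalg.eigh(S)            # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending
    lam = np.empty_like(evals)
    start = 0
    for q in multiplicities:                    # average within blocks
        lam[start:start + q] = evals[start:start + q].mean()
        start += q
    return mu, (evecs * lam) @ evecs.T          # V diag(lam) V^T
```

Setting multiplicities to [1]*q + [d - q] recovers an MPPCA-style profile: q distinct leading eigenvalues plus one shared noise variance for the trailing d - q directions.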
Problem

Research questions and friction points this paper is trying to address.

Overparameterization of full covariance matrices in high-dimensional GMMs
Insufficient flexibility of spherical (isotropic-covariance) GMMs for anisotropic distributions
Jointly learning mixture parameters and eigenvalue-multiplicity hyperparameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parsimonious GMMs with piecewise-constant covariance eigenvalue profiles, allowing any sequence of eigenvalue multiplicities
EM algorithm for learning the mixture parameters when multiplicities are prespecified
Componentwise penalized EM, with proven monotonicity, for jointly learning parameters and multiplicity hyperparameters (see the sketch after this list)
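The componentwise penalty itself is not detailed in this summary; as a stand-in, the sketch below scores candidate multiplicity profiles for a single component with a BIC-style penalized log-likelihood (an assumed penalty, not necessarily the paper's) and searches all compositions of d exhaustively, which is only feasible for small d. All names and the toy spectrum are illustrative.

```python
import numpy as np
from itertools import combinations

def block_average(evals, q):
    """Replace each block of (descending) eigenvalues by its mean."""
    lam, i = np.empty_like(evals), 0
    for qj in q:
        lam[i:i + qj] = evals[i:i + qj].mean()
        i += qj
    return lam

def bic(evals, q, n):
    """BIC-style score (higher is better) for one multiplicity
    profile q, given the scatter's descending eigenvalues."""
    d = evals.size
    lam = block_average(evals, q)
    loglik = -0.5 * n * (d * np.log(2 * np.pi)
                         + np.log(lam).sum()
                         + (evals / lam).sum())
    # mean + J eigenvalues + flag-manifold dim (d^2 - sum q_j^2)/2
    n_params = d + len(q) + (d**2 - sum(qj**2 for qj in q)) // 2
    return loglik - 0.5 * n_params * np.log(n)

def best_profile(evals, n):
    """Exhaustive search over all compositions of d (small d only)."""
    d = evals.size
    best, best_q = -np.inf, None
    for r in range(d):
        for cuts in combinations(range(1, d), r):
            bounds = (0, *cuts, d)
            q = [b - a for a, b in zip(bounds, bounds[1:])]
            s = bic(evals, q, n)
            if s > best:
                best, best_q = s, q
    return best_q

# Toy spectrum with two near-constant plateaus:
evals = np.array([9.8, 10.1, 2.0, 1.9, 2.1, 0.5])
print(best_profile(np.sort(evals)[::-1], n=500))  # likely [2, 3, 1]
```

On this toy spectrum, the near-equal eigenvalues are merged into blocks of sizes 2 and 3, while the isolated small eigenvalue keeps its own block, illustrating how a penalized score can recover the multiplicity structure.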