Matricial Free Energy as a Gaussianizing Regularizer: Enhancing Autoencoders for Gaussian Code Generation

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-Gaussian latent spaces in autoencoders hinder performance in ill-posed inverse problems due to poor statistical priors. Method: This paper proposes a novel matricial-free-energy-based regularization, introducing free probability theory into autoencoder training for the first time. It formulates a differentiable matricial free energy loss that explicitly constrains the singular value distribution of the code matrix to match the spectral properties of Gaussian random matrices, thereby inducing approximately standard Gaussian latent variables without explicit sampling or adversarial training. The method is architecture-agnostic and generalizes across datasets. Contributions/Results: Experiments demonstrate consistent improvements across multiple benchmarks: latent codes exhibit stronger Gaussianity and tighter structural compactness, and reconstruction accuracy and robustness in ill-posed inverse tasks, including compressive sensing and image denoising, are significantly enhanced during both training and inference.

📝 Abstract
We introduce a novel regularization scheme for autoencoders based on matricial free energy. Our approach defines a differentiable loss function in terms of the singular values of the code matrix (code dimension × batch size). From the standpoint of free probability and random matrix theory, this loss achieves its minimum when the singular value distribution of the code matrix coincides with that of an appropriately scaled random matrix with i.i.d. Gaussian entries. Empirical simulations demonstrate that minimizing the negative matricial free energy through standard stochastic gradient-based training yields Gaussian-like codes that generalize across training and test sets. Building on this foundation, we propose a matricial free energy maximizing autoencoder that reliably produces Gaussian codes and show its application to underdetermined inverse problems.
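The abstract describes a differentiable loss on the singular values of the code matrix whose minimum is attained at the spectrum of an i.i.d. Gaussian matrix. The paper's exact loss is not given here; the sketch below uses a standard log-gas form from free probability (quadratic confinement minus pairwise logarithmic repulsion, as in Voiculescu's free entropy), with the function name, normalization, and `1e-12` stabilizer all hypothetical choices of this illustration.

```python
import numpy as np

def neg_matricial_free_energy(Z):
    """Log-gas sketch of a negative-free-energy penalty on the singular
    values of a code matrix Z (code_dim x batch_size).  The confinement
    term pulls singular values toward the Gaussian scale; the repulsion
    term rewards a spread-out spectrum.  Hypothetical form -- the paper's
    actual loss may differ."""
    s = np.linalg.svd(Z, compute_uv=False)
    # Normalize so a standard Gaussian code matrix has O(1) singular values.
    s = s / np.sqrt(Z.shape[1])
    confinement = 0.5 * np.mean(s ** 2)
    i, j = np.triu_indices(len(s), k=1)
    repulsion = np.mean(np.log(np.abs(s[i] ** 2 - s[j] ** 2) + 1e-12))
    return confinement - repulsion  # lower = more Gaussian-like spectrum

rng = np.random.default_rng(0)
gaussian_code = rng.standard_normal((16, 256))          # i.i.d. Gaussian codes
collapsed_code = np.outer(rng.standard_normal(16),      # rank-1 degenerate codes
                          rng.standard_normal(256))
```

Under this form, a Gaussian code matrix scores near the minimum, while a rank-collapsed code matrix is heavily penalized by the repulsion term, which is the qualitative behavior the abstract attributes to the regularizer.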
Problem

Research questions and friction points this paper is trying to address.

Regularizing autoencoders to generate Gaussian-like codes
Minimizing the negative matricial free energy so codes generalize across training and test sets
Applying Gaussian codes to underdetermined inverse problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matricial free energy regularizes autoencoder training
Differentiable loss matches code-matrix singular values to Gaussian random-matrix spectra
Autoencoder maximizes free energy for Gaussian code generation