Sharp detection of low-dimensional structure in probability measures via dimensional logarithmic Sobolev inequalities

📅 2024-06-18
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
📄 PDF
🤖 AI Summary
This work addresses the precise identification and approximation of low-dimensional structure embedded in high-dimensional probability measures. Methodologically, it introduces the *dimensional logarithmic Sobolev inequality* (dimensional LSI) as a unifying theoretical framework. When both the target and reference measures are Gaussian, it establishes an exact equivalence between minimizing the dimensional LSI bound and minimizing the KL divergence restricted to the low-dimensional ansatz; for general non-Gaussian measures, the dimensional LSI yields gradient-based upper bounds for dimension reduction that uniformly improve on prior majorants. The approach combines dimensional LSI analysis, minimization of KL majorants, gradient-based dimension reduction, and an extension to the squared Hellinger distance via the dimensional Poincaré inequality. The framework improves both the accuracy of low-dimensional structure detection and sampling efficiency, and in generative-modeling settings it yields approximations that are more compact, interpretable, and computationally efficient.
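To make the gradient-based dimension reduction idea concrete, here is a minimal NumPy sketch of the standard diagnostic-matrix construction that such majorants certify: form H = E_π[∇log(dπ/dμ) ∇log(dπ/dμ)ᵀ] and take its leading eigenvectors as the significant directions. The Gaussian toy setup (dimension, perturbation direction `u`, sample size) is an illustrative assumption, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20_000

# Toy setup (assumption): reference mu = N(0, I_d); target pi = N(0, Sigma),
# where Sigma perturbs the identity along a single direction u.
u = np.zeros(d)
u[0] = 1.0
Sigma = np.eye(d) + 3.0 * np.outer(u, u)

# Exact sampling from pi is easy here because pi is Gaussian.
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((n, d)) @ L.T

# For Gaussians, grad log (dpi/dmu)(x) = -(Sigma^{-1} - I) x.
A = np.linalg.inv(Sigma) - np.eye(d)
G = -X @ A.T

# Monte Carlo estimate of the diagnostic matrix H = E_pi[g g^T].
H = G.T @ G / n

# The top eigenvector of H recovers the perturbation direction u.
eigvals, eigvecs = np.linalg.eigh(H)  # ascending eigenvalues
v = eigvecs[:, -1]
print(abs(v @ u))  # close to 1: the significant direction is recovered
```

Because the perturbation is rank one, H is (up to sampling noise) rank one as well, and the eigenvalue gap makes the split between "significant" and "negligible" directions unambiguous; in practice one truncates at the spectral gap of H.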

📝 Abstract
Identifying low-dimensional structure in high-dimensional probability measures is an essential pre-processing step for efficient sampling. We introduce a method for identifying and approximating a target measure $\pi$ as a perturbation of a given reference measure $\mu$ along a few significant directions of $\mathbb{R}^{d}$. The reference measure can be a Gaussian or a nonlinear transformation of a Gaussian, as commonly arising in generative modeling. Our method extends prior work on minimizing majorizations of the Kullback--Leibler divergence to identify optimal approximations within this class of measures. Our main contribution unveils a connection between the *dimensional* logarithmic Sobolev inequality (LSI) and approximations with this ansatz. Specifically, when the target and reference are both Gaussian, we show that minimizing the dimensional LSI is equivalent to minimizing the KL divergence restricted to this ansatz. For general non-Gaussian measures, the dimensional LSI produces majorants that uniformly improve on previous majorants for gradient-based dimension reduction. We further demonstrate the applicability of this analysis to the squared Hellinger distance, where analogous reasoning shows that the dimensional Poincaré inequality offers improved bounds.
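For context, the classical (non-dimensional) log-Sobolev inequality for the standard Gaussian $\gamma$ on $\mathbb{R}^d$ is the baseline majorant that the dimensional variants refine; the exact form of the dimensional LSI and its constants are given in the paper itself.

```latex
\mathrm{KL}(\pi \,\|\, \gamma) \;\le\; \tfrac{1}{2}\, I(\pi \,\|\, \gamma),
\qquad
I(\pi \,\|\, \gamma) \;=\; \int_{\mathbb{R}^d}
  \Bigl\| \nabla \log \tfrac{d\pi}{d\gamma} \Bigr\|^2 \, d\pi ,
```

where $I(\pi \,\|\, \gamma)$ is the relative Fisher information. Bounds of this type are what make gradient information (rather than the density itself) sufficient for certified dimension reduction.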
Problem

Research questions and friction points this paper is trying to address.

Detecting low-dimensional structure in high-dimensional probability measures
Approximating target measures as perturbations of reference measures along a few directions
Connecting dimensional logarithmic Sobolev inequalities with optimal low-dimensional approximations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects low-dimensional structure via the dimensional logarithmic Sobolev inequality
Approximates measures as perturbations of a reference along significant directions
Improves the majorants used in gradient-based dimension reduction
Matthew T.C. Li
Center for Computational Science and Engineering, MIT, USA
Tiangang Cui
University of Sydney
Inverse Problems, Data Assimilation, MCMC, Subsurface Modeling
Fengyi Li
Center for Computational Science and Engineering, MIT, USA
Youssef M. Marzouk
Center for Computational Science and Engineering, MIT, USA
O. Zahm
Université Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, France