Mirror Mean-Field Langevin Dynamics

📅 2025-05-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing mean-field Langevin dynamics (MFLD) struggle with optimizing probability measures over convex constrained domains due to their global diffusion term. This work proposes mirror mean-field Langevin dynamics (MMFLD), the first extension of the mirror Langevin framework to the mean-field setting, tailored for entropy-regularized nonlinear optimization under geometric constraints. Theoretically, we establish linear convergence of the continuous MMFLD in the Wasserstein metric and prove uniform-in-time propagation of chaos for its time- and particle-discretized counterpart. Our analysis integrates mirror descent principles, logarithmic Sobolev inequalities, and propagation-of-chaos techniques. As a result, MMFLD provides the first mean-field optimization framework for infinitely wide neural networks with both provable convergence guarantees and discrete-time stability under domain constraints.
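In standard mean-field Langevin notation (not spelled out in this summary; the functional $F$ and the regularization strength $\lambda$ are generic placeholders), the entropy-regularized objective that MMFLD targets over a constrained domain can be written as:

```latex
\min_{\mu \in \mathcal{P}(\mathcal{X})} \; F(\mu) \;+\; \lambda \int_{\mathcal{X}} \mu(x) \log \mu(x) \, dx,
\qquad \mathcal{X} \subseteq \mathbb{R}^d \text{ convex},
```

where $F$ is a nonlinear convex functional on probability measures and $\lambda > 0$ weights the entropy term; the mirror map restricts the dynamics so that the measure stays supported on the convex set $\mathcal{X}$.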

๐Ÿ“ Abstract
The mean-field Langevin dynamics (MFLD) minimizes an entropy-regularized nonlinear convex functional on the Wasserstein space over $\mathbb{R}^d$, and has gained attention recently as a model for the gradient descent dynamics of interacting particle systems such as infinite-width two-layer neural networks. However, many problems of interest have constrained domains, which are not solved by existing mean-field algorithms due to the global diffusion term. We study the optimization of probability measures constrained to a convex subset of $\mathbb{R}^d$ by proposing the *mirror mean-field Langevin dynamics* (MMFLD), an extension of MFLD to the mirror Langevin framework. We obtain linear convergence guarantees for the continuous MMFLD via a uniform log-Sobolev inequality, and uniform-in-time propagation of chaos results for its time- and particle-discretized counterpart.
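As a toy illustration of the time- and particle-discretized dynamics described in the abstract, here is a minimal sketch of a mirror Langevin step for interacting particles on a constrained domain. Everything here is an illustrative assumption, not the paper's construction: the entropic mirror map on $(0,1)$, the quadratic potential, the mean-field coupling, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entropic mirror map on the constrained domain (0, 1):
#   phi(x) = x log x + (1 - x) log(1 - x)
# so that nabla phi = logit, its inverse = sigmoid, and phi''(x) = 1/(x(1-x)).
def nabla_phi(x):
    return np.log(x) - np.log1p(-x)

def nabla_phi_inv(y):
    return 1.0 / (1.0 + np.exp(-y))

def mean_field_grad(x):
    # Illustrative first-variation gradient: a pull toward 0.25 plus a
    # mean-field coupling attracting each particle to the empirical mean.
    return (x - 0.25) + 0.5 * (x - x.mean())

N, steps, h, lam = 512, 2000, 1e-2, 0.05  # particles, iterations, step size, temperature
x = rng.uniform(0.4, 0.6, size=N)

for _ in range(steps):
    y = nabla_phi(x)
    # Euler-Maruyama step in the dual (mirror) space; scaling the noise by
    # sqrt(phi''(x)) is what keeps the primal iterates inside (0, 1).
    y -= h * mean_field_grad(x)
    y += np.sqrt(2.0 * lam * h / (x * (1.0 - x))) * rng.standard_normal(N)
    x = np.clip(nabla_phi_inv(y), 1e-9, 1.0 - 1e-9)
```

The key contrast with unconstrained MFLD is that the global diffusion term here acts in the mirror (dual) coordinates, so the particles never leave the constraint set.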
Problem

Research questions and friction points this paper is trying to address.

Extends mean-field Langevin dynamics to constrained domains
Optimizes probability measures on convex subsets of ℝᵈ
Ensures convergence via log-Sobolev inequality and chaos control
Innovation

Methods, ideas, or system contributions that make the work stand out.

First extension of the mirror Langevin framework to the mean-field setting
Linear convergence of the continuous MMFLD via a uniform log-Sobolev inequality
Uniform-in-time propagation of chaos for the time- and particle-discretized dynamics