AI Summary
Existing mean-field Langevin dynamics (MFLD) struggle to optimize probability measures over constrained convex domains because of their global diffusion term. This work proposes mirror mean-field Langevin dynamics (MMFLD), the first extension of the mirror Langevin framework to the mean-field setting, tailored to entropy-regularized nonlinear optimization under geometric constraints. Theoretically, the continuous MMFLD is shown to converge linearly in the Wasserstein metric, and uniform-in-time propagation of chaos is established for its time- and particle-discretized counterpart. The analysis integrates mirror descent principles, logarithmic Sobolev inequalities, and propagation-of-chaos techniques. As a result, MMFLD provides the first mean-field optimization framework for infinitely wide neural networks with both provable convergence guarantees and discrete-time stability under domain constraints.
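As a rough illustration of the idea (a plausible form inferred from the summary, not the paper's exact equations), the mirror Langevin mechanism applied to MFLD evolves each particle through a mirror map $\phi$ whose gradient sends the constraint set onto $\mathbb{R}^d$. Writing $F(\mu) = F_0(\mu) + \lambda\,\mathrm{Ent}(\mu)$ for the entropy-regularized objective, such a dynamics could take the form

$$
Y_t = \nabla\phi(X_t), \qquad
\mathrm{d}Y_t = -\nabla \frac{\delta F_0}{\delta \mu}(\mu_t)(X_t)\,\mathrm{d}t
+ \sqrt{2\lambda}\,\big(\nabla^2\phi(X_t)\big)^{1/2}\,\mathrm{d}B_t,
\qquad \mu_t = \mathrm{Law}(X_t),
$$

which combines the MFLD drift (the Wasserstein gradient of $F_0$ at the current law $\mu_t$) with the mirror Langevin diffusion. Because the update happens in the dual variable $Y_t$, the primal iterate $X_t = \nabla\phi^*(Y_t)$ never leaves the constraint set; the paper's normalization and noise structure may differ.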
Abstract
The mean-field Langevin dynamics (MFLD) minimizes an entropy-regularized nonlinear convex functional on the Wasserstein space over $\mathbb{R}^d$, and has recently gained attention as a model for the gradient descent dynamics of interacting particle systems such as infinite-width two-layer neural networks. However, many problems of interest involve constrained domains, which existing mean-field algorithms cannot handle due to the global diffusion term. We study the optimization of probability measures constrained to a convex subset of $\mathbb{R}^d$ by proposing the \emph{mirror mean-field Langevin dynamics} (MMFLD), an extension of MFLD to the mirror Langevin framework. We obtain linear convergence guarantees for the continuous MMFLD via a uniform log-Sobolev inequality, and uniform-in-time propagation of chaos results for its time- and particle-discretized counterpart.
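To make the time- and particle-discretized counterpart concrete, here is a minimal, self-contained sketch, not the paper's algorithm: it runs a mirror Langevin particle system on the box $(-1,1)^d$ using the entropic mirror map and a toy pairwise-interaction energy. The mirror map, the energy `F0`, and all step-size and regularization values below are illustrative assumptions.

```python
import numpy as np

# Illustrative constraint set: the open box (-1, 1)^d, handled via the
# entropic mirror map phi(x) = sum_k [(1+x_k)log(1+x_k) + (1-x_k)log(1-x_k)],
# whose gradient, inverse gradient, and Hessian all have closed forms.

def nabla_phi(x):
    """Mirror map, primal -> dual: (nabla phi)(x)_k = log((1+x_k)/(1-x_k))."""
    x = np.clip(x, -1 + 1e-12, 1 - 1e-12)  # numerical guard near the boundary
    return np.log1p(x) - np.log1p(-x)

def nabla_phi_star(y):
    """Inverse map, dual -> primal; its range is the open box (-1, 1)^d."""
    return np.tanh(0.5 * y)

def sqrt_hess_phi(x):
    """Elementwise square root of the (diagonal) Hessian of phi."""
    return np.sqrt(2.0 / np.maximum(1.0 - x**2, 1e-12))

def grad_F0(X):
    """Wasserstein gradient of a toy energy
    F0(mu) = E[V(X)] + 0.5 * E[W(X, X')], with V(x) = |x|^2 / 2 and
    W(x, x') = |x - x'|^2 / 2, evaluated at the empirical measure of X."""
    diff = X[:, None, :] - X[None, :, :]     # pairwise differences, (N, N, d)
    return X + diff.mean(axis=1)

def mmfld_step(X, h, lam, rng):
    """One Euler-Maruyama step in the dual space: mirror-descent drift plus
    Hessian-weighted Gaussian noise, then map back through nabla_phi_star,
    which keeps every particle inside the constraint set."""
    Y = nabla_phi(X)
    noise = sqrt_hess_phi(X) * rng.standard_normal(X.shape)
    Y = Y - h * grad_F0(X) + np.sqrt(2.0 * lam * h) * noise
    return nabla_phi_star(Y)

rng = np.random.default_rng(0)
N, d, h, lam = 256, 2, 1e-2, 5e-2
X = rng.uniform(-0.5, 0.5, size=(N, d))      # N particles approximating mu
for _ in range(1000):
    X = mmfld_step(X, h, lam, rng)
print("max |x| after 1000 steps:", np.abs(X).max())  # stays below 1
```

Note that the constraint is enforced purely by the geometry of the mirror map rather than by projection or rejection, which is what distinguishes this family of methods from projected Langevin schemes.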