🤖 AI Summary
This work addresses optimization and sampling in the Wasserstein space for objectives that fail to be convex along generalized geodesics, for instance when the log-density of the sampling target is difference-of-convex. To this end, we propose the semi-implicit Forward-Backward Euler scheme, a slight modification of the classical Forward-Backward Euler discretization, and establish the first rigorous convergence guarantees under non-geodesic convexity and possible nonsmoothness of the objective. The analysis combines Wasserstein gradient-flow modeling, difference-of-convex (DC) analysis, and the variational structure of generalized geodesics. The theoretical results provide several complementary convergence guarantees, including energy dissipation, contraction of the Wasserstein distance, and weak convergence of the iterated probability measures, substantially broadening the scope of non-convex distributional sampling and learning. This framework delivers a novel theoretical foundation for high-dimensional non-convex Bayesian inference and generative modeling.
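As a concrete illustration of the sampling instance (notation ours, not necessarily the paper's): for a target $\pi \propto e^{-V}$ with a difference-of-convex potential, sampling can be cast as a free-energy minimization over the Wasserstein space,

$$
\min_{\mu \in \mathcal{P}_2(\mathbb{R}^d)} \; \mathcal{F}(\mu) := \int_{\mathbb{R}^d} V \, d\mu + \int_{\mathbb{R}^d} \mu \log \mu \, dx, \qquad V = V_1 - V_2,
$$

with $V_1, V_2$ convex. The potential term then inherits a difference-of-convex structure along generalized geodesics, while the entropy term remains convex along them.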
📝 Abstract
We study a class of optimization problems in the Wasserstein space (the space of probability measures) where the objective function is nonconvex along generalized geodesics; specifically, the objective exhibits a difference-of-convex structure along these geodesics. The setting also encompasses sampling problems where the logarithm of the target distribution is difference-of-convex. We derive several convergence results for a novel semi Forward-Backward Euler scheme under various nonconvex (and possibly nonsmooth) regimes. Notably, the semi Forward-Backward Euler scheme is only a slight modification of the Forward-Backward Euler scheme, whose convergence is, to our knowledge, still unknown in this very general non-geodesically-convex setting.
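For a concrete picture, here is a minimal particle-level sketch of the baseline Forward-Backward Euler scheme that the paper modifies. Everything in it is illustrative: the potentials `V1` and `V2`, their closed-form prox and gradient, and the one-dimensional particle discretization are our own choices, and the sketch does not reproduce the paper's semi-implicit modification.

```python
import numpy as np

# Hypothetical DC potential V = V1 - V2 on R (both parts convex):
#   V1(x) = x**2 / 2 + |x|    (convex, nonsmooth; accessed via its prox)
#   V2(x) = sqrt(1 + x**2)    (convex, smooth; accessed via its gradient)
# Target: pi ∝ exp(-V); V1 dominates at infinity, so pi is well defined.

def prox_v1(y, gamma):
    """Proximal map of gamma * V1: soft-thresholding followed by shrinkage
    (closed form for V1(x) = x^2/2 + |x|)."""
    return np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0) / (1.0 + gamma)

def grad_v2(x):
    """Gradient of the smooth convex part V2(x) = sqrt(1 + x^2)."""
    return x / np.sqrt(1.0 + x * x)

def fb_euler_step(x, gamma, rng):
    """One baseline Forward-Backward Euler step on a particle cloud:
    an explicit (forward) step on the -V2 term together with Gaussian noise
    for the entropy flow, then an implicit (backward/proximal) step on the
    convex nonsmooth part V1. The paper's *semi* Forward-Backward Euler
    scheme is a modification of this template, not shown here."""
    y = x + gamma * grad_v2(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
    return prox_v1(y, gamma)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)   # particle approximation of the initial law
gamma = 1e-2                      # step size
for _ in range(2_000):
    x = fb_euler_step(x, gamma, rng)
print(f"empirical mean {x.mean():+.3f}, std {x.std():.3f}")
```

At the particle level, the Gaussian noise realizes an exact heat-semigroup step for the entropy term, while the prox realizes the backward (JKO-type) step on the nonsmooth convex part of the potential.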