🤖 AI Summary
This work addresses the high computational cost of the Jordan–Kinderlehrer–Otto (JKO) scheme for computing Wasserstein gradient flows, which arises from repeatedly solving variational subproblems. To overcome this bottleneck, the authors propose a self-supervised neural operator that directly learns the mapping from an input density to its corresponding JKO minimizer. They introduce a "Learn-to-Evolve" algorithm that alternates between synthesizing evolution trajectories and updating the operator, jointly training both without requiring ground-truth trajectory labels. This alternation also acts as a natural form of data augmentation, enhancing generalization. The method efficiently produces accurate, stable, and robust gradient-flow evolutions from only a small number of initial densities and applies broadly across diverse energy functionals and initial conditions.
📝 Abstract
The Jordan–Kinderlehrer–Otto (JKO) scheme provides a stable variational framework for computing Wasserstein gradient flows, but its practical use is often limited by the high computational cost of repeatedly solving the JKO subproblems. We propose a self-supervised approach for learning a JKO solution operator without requiring numerical solutions of any JKO trajectories. The learned operator maps an input density directly to the minimizer of the corresponding JKO subproblem and can be applied iteratively to efficiently generate the gradient-flow evolution. A key challenge is that only a small number of initial densities are typically available for training. To address this, we introduce a Learn-to-Evolve algorithm that jointly learns the JKO operator and its induced trajectories by alternating between trajectory generation and operator updates. As training progresses, the generated data increasingly approximates true JKO trajectories. Moreover, this Learn-to-Evolve strategy serves as a natural form of data augmentation, significantly enhancing the generalization ability of the learned operator. Numerical experiments demonstrate the accuracy, stability, and robustness of the proposed method across various choices of energies and initial conditions.
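The alternating structure described in the abstract can be sketched in toy form. Everything below is a hypothetical illustration, not the paper's method: a trainable linear map `W` stands in for the neural operator, a discrete entropy energy for the functional, and a plain L2 penalty for the Wasserstein-2 term. Only the shape of the Learn-to-Evolve loop matches the description, namely rolling out trajectories with the current operator, then updating the operator on those self-generated densities using the JKO-type objective as a self-supervised loss.

```python
import numpy as np

n, dx, tau, lr = 32, 1.0 / 32, 0.05, 1e-3

def normalize(rho):
    """Project onto positive densities with unit mass on the grid."""
    rho = np.clip(rho, 1e-8, None)
    return rho / (rho.sum() * dx)

def jko_objective(rho_out, rho_in):
    """Entropy energy plus an L2 proxy for the Wasserstein-2 penalty."""
    entropy = np.sum(rho_out * np.log(rho_out)) * dx
    return entropy + np.sum((rho_out - rho_in) ** 2) * dx / (2 * tau)

# Hypothetical stand-in for the neural operator: a linear map, initialized
# as the identity so the first rollout simply repeats the input density.
W = np.eye(n)

x = np.linspace(0.0, 1.0, n, endpoint=False)
initial_densities = [normalize(np.exp(-((x - c) ** 2) / 0.01)) for c in (0.3, 0.7)]

for _ in range(40):  # Learn-to-Evolve alternation
    # (1) Trajectory generation: roll out the current operator from each
    #     of the few available initial densities.
    trajectories = []
    for rho0 in initial_densities:
        rho, traj = rho0, [rho0]
        for _ in range(5):
            rho = normalize(W @ rho)
            traj.append(rho)
        trajectories.append(traj)
    # (2) Operator update: descend the self-supervised JKO objective at
    #     every generated density (no ground-truth trajectory labels).
    for traj in trajectories:
        for rho_in in traj[:-1]:
            rho_out = normalize(W @ rho_in)
            # Gradient of the objective w.r.t. rho_out (normalization is
            # ignored in this crude sketch), then chain rule through W @ rho_in.
            g = (np.log(rho_out) + 1.0) * dx + (rho_out - rho_in) * dx / tau
            W -= lr * np.outer(g, rho_in)

rho_final = normalize(W @ initial_densities[0])
final_loss = jko_objective(rho_final, initial_densities[0])
```

In practice the operator would be a neural network trained by automatic differentiation, and the L2 proxy would be replaced by an actual Wasserstein-2 term, which requires an optimal-transport solver or a dynamic (Benamou–Brenier-type) formulation; the sketch only shows how generated trajectories feed back into operator updates.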