🤖 AI Summary
This work addresses the Monge optimal transport problem by proposing mGradNets, an end-to-end learning framework based on monotone gradient neural networks. Unlike conventional indirect approaches, mGradNets directly parameterize the space of monotone gradient maps—enforcing the structural prior guaranteed by Brenier's theorem (that the optimal map is the gradient of a convex potential) and incorporating the Monge–Ampère equation into a differentiable training loss. This enables efficient, high-fidelity approximation of the optimal transport map. Experiments show that mGradNets significantly outperform existing baselines across diverse distribution-matching tasks, and the method is further applied to coordinated control of robot swarms, demonstrating its practical applicability.
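To make the structural prior concrete, here is a minimal sketch (not the authors' exact architecture) of a single-layer monotone gradient map: the potential φ(x) = Σᵢ aᵢ·softplus(wᵢ·x + bᵢ) with aᵢ ≥ 0 is convex, so its gradient is a monotone map with a symmetric positive semidefinite Jacobian, which we verify numerically. All weight names and the layer form are illustrative assumptions.

```python
import numpy as np

# Hypothetical single-layer monotone-gradient map (illustrative, not the
# paper's architecture): phi(x) = sum_i a_i * softplus(w_i . x + b_i),
# with a_i >= 0, is convex, so grad phi is a monotone gradient map.

rng = np.random.default_rng(0)
d, k = 3, 8                        # input dimension, number of hidden units
W = rng.standard_normal((k, d))    # hidden weights
b = rng.standard_normal(k)         # hidden biases
a = rng.uniform(0.1, 1.0, k)       # nonnegative output weights keep phi convex

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_phi(x):
    """Gradient of phi at x: sum_i a_i * sigmoid(w_i.x + b_i) * w_i."""
    s = sigmoid(W @ x + b)         # derivative of softplus is sigmoid
    return W.T @ (a * s)

def jacobian(x):
    """Jacobian of grad_phi: W^T diag(a * sigmoid'(.)) W, symmetric PSD."""
    s = sigmoid(W @ x + b)
    return (W * (a * s * (1.0 - s))[:, None]).T @ W

x = rng.standard_normal(d)
J = jacobian(x)
eigs = np.linalg.eigvalsh(J)
print(np.allclose(J, J.T) and eigs.min() >= -1e-12)  # → True: monotone map
```

Monotonicity here is a direct consequence of the architecture, not of the training loss: any setting of `W`, `b`, and nonnegative `a` yields a valid monotone gradient map.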
📝 Abstract
Monotone gradient functions play a central role in solving the Monge formulation of the optimal transport problem, which arises in modern applications ranging from fluid dynamics to robot swarm control. When the transport cost is the squared Euclidean distance, Brenier's theorem guarantees that the unique optimal map is the gradient of a convex function, namely a monotone gradient map, and it satisfies a Monge-Ampère equation. In prior work [arXiv:2301.10862, arXiv:2404.07361], we proposed Monotone Gradient Networks (mGradNets), neural networks that directly parameterize the space of monotone gradient maps. In this work, we leverage mGradNets to directly learn the optimal transport mapping by minimizing a training loss function defined using the Monge-Ampère equation. We empirically show that the structural bias of mGradNets facilitates the learning of optimal transport maps and employ our method for a robot swarm control problem.
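The abstract does not spell out the loss, but a natural Monge-Ampère-style residual can be sketched as follows (an assumption, not the paper's exact objective). For squared-Euclidean cost, the optimal map T satisfies det(DT(x))·q(T(x)) = p(x), where p and q are the source and target densities, so one can penalize the squared log-residual over source samples. The 1-D Gaussian-to-Gaussian example below, where the optimal map is the affine rescaling, checks that the residual vanishes at the optimum; all function names are illustrative.

```python
import numpy as np

# Hedged sketch of a Monge-Ampere-style training residual (an assumption,
# not the paper's exact loss). The optimal map T satisfies
#   det(DT(x)) * q(T(x)) = p(x),
# so we penalize L(T) = mean_x ( log det DT(x) + log q(T(x)) - log p(x) )^2.

def gauss_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

def ma_loss(T, dT, xs, logp, logq):
    """Mean squared Monge-Ampere log-residual over source samples (1-D)."""
    res = np.log(dT(xs)) + logq(T(xs)) - logp(xs)
    return np.mean(res ** 2)

# 1-D Gaussian-to-Gaussian case: the optimal monotone map is affine.
mu_p, s_p, mu_q, s_q = 0.0, 1.0, 2.0, 0.5
T_opt  = lambda x: mu_q + (s_q / s_p) * (x - mu_p)   # optimal transport map
dT_opt = lambda x: np.full_like(x, s_q / s_p)        # its (scalar) derivative

xs = np.random.default_rng(1).normal(mu_p, s_p, 1000)
loss = ma_loss(T_opt, dT_opt, xs,
               lambda x: gauss_logpdf(x, mu_p, s_p),
               lambda y: gauss_logpdf(y, mu_q, s_q))
print(loss < 1e-12)  # → True: the optimal map zeros the residual
```

In practice, `T` and `dT` would come from an mGradNet and its (positive semidefinite) Jacobian rather than a closed-form map, and the loss would be minimized by gradient descent.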