🤖 AI Summary
This work addresses the high computational complexity and poor scalability of classical optimal transport (OT) by surveying sliced optimal transport (SOT), an efficient framework for analyzing probability measures. Methodologically, SOT integrates integral geometry with statistical estimation: nonlinear projections and adaptive weighted slicing strategies improve Monte Carlo approximation, and extensions cover unbalanced, multi-marginal, and Gromov–Wasserstein settings. Theoretically and algorithmically, the framework unifies sliced Wasserstein distance computation, barycenter estimation, kernel construction, and embedding learning. The surveyed methods preserve much of the rich geometric structure of OT while achieving significant scalability improvements, with efficacy and practicality demonstrated across diverse machine learning, computer vision, and graphics tasks.
📝 Abstract
Sliced Optimal Transport (SOT) is a rapidly developing branch of optimal transport (OT) that exploits the tractability of one-dimensional OT problems. By combining tools from OT, integral geometry, and computational statistics, SOT enables fast and scalable computation of distances, barycenters, and kernels for probability measures, while retaining rich geometric structure. This paper provides a comprehensive review of SOT, covering its mathematical foundations, methodological advances, computational methods, and applications. We discuss key concepts of OT and one-dimensional OT, the role of tools from integral geometry such as the Radon transform in projecting measures, and statistical techniques for estimating sliced distances. The paper further explores recent methodological advances, including non-linear projections, improved Monte Carlo approximations, statistical estimation techniques for one-dimensional optimal transport, weighted slicing techniques, and transportation plan estimation methods. Variational problems, such as minimum sliced Wasserstein estimation, barycenters, gradient flows, kernel constructions, and embeddings, are examined alongside extensions to unbalanced, partial, multi-marginal, and Gromov–Wasserstein settings. Applications span machine learning, statistics, computer graphics, and computer vision, highlighting SOT's versatility as a practical computational tool. This work will be of interest to researchers and practitioners in machine learning, data science, and computational disciplines seeking efficient alternatives to classical OT.
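The core mechanism the abstract describes — projecting measures to one dimension, where OT has a closed form via sorting, and averaging over random directions — can be illustrated with a minimal sketch. This is not code from the paper; the function name, parameters, and uniform-weight assumption are all illustrative:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-p distance between
    two empirical measures given as (n, d) sample arrays.

    Assumes X and Y have the same number of samples with uniform weights,
    so each one-dimensional OT problem is solved by the monotone (sorted)
    matching -- the closed form that makes SOT fast.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere S^{d-1}.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point clouds onto each direction (Radon-style slicing).
    X_proj = X @ theta.T   # shape (n, n_projections)
    Y_proj = Y @ theta.T
    # 1D OT between equally weighted samples: match sorted projections.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    # Average p-th power costs over samples and slices, take the p-th root.
    return float((np.abs(X_sorted - Y_sorted) ** p).mean() ** (1.0 / p))
```

Each projection costs only a sort, O(n log n), which is the scalability advantage over solving a full d-dimensional OT problem; the surveyed weighted-slicing and improved Monte Carlo strategies refine how the directions `theta` are drawn and combined.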