🤖 AI Summary
This work investigates the cross-distribution transferability of slicers in optimal transport: under the min-Sliced Transport Plan (min-STP) framework, can a slicer optimized on one distribution pair generalize to unseen distribution pairs? To address this, we propose a transferable min-STP framework, establishing theoretical guarantees on slicer stability under distributional perturbations and enabling cross-task transfer; we further introduce mini-batch optimization for scalability. Our method integrates sliced optimal transport, closed-form solutions for 1D Wasserstein distances, statistical error analysis, and amortized training. Experiments on point-cloud alignment and streaming generation demonstrate single-shot matching with significant inference speedup while preserving high accuracy. The core contribution is the first theoretical foundation for slicer transferability—rigorously characterizing its generalization capability—and empirical validation across shape analysis, image generation, and multimodal alignment.
📝 Abstract
Optimal Transport (OT) offers a powerful framework for finding correspondences between distributions and addressing matching and alignment problems in various areas of computer vision, including shape analysis, image generation, and multimodal tasks. The computation cost of OT, however, hinders its scalability. Slice-based transport plans have recently shown promise for reducing the computational cost by leveraging the closed-form solutions of 1D OT problems. These methods optimize a one-dimensional projection (slice) to obtain a conditional transport plan that minimizes the transport cost in the ambient space. While efficient, these methods leave open the question of whether learned optimal slicers can transfer to new distribution pairs under distributional shift. Understanding this transferability is crucial in settings with evolving data or repeated OT computations across closely related distributions. In this paper, we study the min-Sliced Transport Plan (min-STP) framework and investigate the transferability of optimized slicers: can a slicer trained on one distribution pair yield effective transport plans for new, unseen pairs? Theoretically, we show that optimized slicers remain close under slight perturbations of the data distributions, enabling efficient transfer across related tasks. To further improve scalability, we introduce a minibatch formulation of min-STP and provide statistical guarantees on its accuracy. Empirically, we demonstrate that the transferable min-STP achieves strong one-shot matching performance and facilitates amortized training for point cloud alignment and flow-based generative modeling.