Efficient Sliced Wasserstein Distance Computation via Adaptive Bayesian Optimization

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
The sliced Wasserstein distance (SW) suffers from slow convergence and low accuracy when used inside optimization loops (e.g., gradient flows), owing to inefficient random selection of projection directions. Method: This work introduces adaptive Bayesian optimization for learning SW projection directions—the first such approach—proposing two dynamic direction-selection frameworks: Adaptive Bayesian Optimal SW (ABOSW) and its restart-enhanced variant, ARBOSW. The methods combine quasi-Monte Carlo seeding, unit-sphere embedding, and lightweight iterative refinement; they require no modification to downstream loss functions and can be deployed plug-and-play. Results: On the QSW benchmark suite, ABOSW and ARBOSW match the convergence rate and accuracy of the best-performing SW variants while keeping computational overhead controllable. They significantly improve the efficiency and stability of high-dimensional optimal transport in generative modeling and image registration tasks.

📝 Abstract
The sliced Wasserstein distance (SW) reduces optimal transport on $\mathbb{R}^d$ to a sum of one-dimensional projections, and thanks to this efficiency, it is widely used in geometry, generative modeling, and registration tasks. Recent work shows that quasi-Monte Carlo constructions for computing SW (QSW) yield direction sets with excellent approximation error. This paper presents an alternate, novel approach: learning directions with Bayesian optimization (BO), particularly in settings where SW appears inside an optimization loop (e.g., gradient flows). We introduce a family of drop-in selectors for projection directions: BOSW, a one-shot BO scheme on the unit sphere; RBOSW, a periodic-refresh variant; ABOSW, an adaptive hybrid that seeds from competitive QSW sets and performs a few lightweight BO refinements; and ARBOSW, a restarted hybrid that periodically relearns directions during optimization. Our BO approaches can be composed with QSW and its variants (demonstrated by ABOSW/ARBOSW) and require no changes to downstream losses or gradients. We provide numerical experiments where our methods achieve state-of-the-art performance, and on the experimental suite of the original QSW paper, we find that ABOSW and ARBOSW can achieve convergence comparable to the best QSW variants with modest runtime overhead.
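The abstract's starting point—reducing optimal transport on $\mathbb{R}^d$ to one-dimensional projections—can be sketched as a standard Monte Carlo estimator. This is a generic illustration, not the paper's code; the function name and default parameters are our own:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_dirs=128, p=2, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein distance SW_p
    between two equal-size point clouds X, Y of shape (n, d).
    Directions are drawn uniformly from the unit sphere S^{d-1}."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Uniform directions on the sphere via normalized Gaussian vectors.
    theta = rng.standard_normal((n_dirs, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both clouds onto every direction at once.
    Xp = X @ theta.T  # shape (n, n_dirs)
    Yp = Y @ theta.T
    # 1D Wasserstein-p between equal-size empirical measures:
    # sort each projection and average |x_(i) - y_(i)|^p.
    Xp.sort(axis=0)
    Yp.sort(axis=0)
    return np.mean(np.abs(Xp - Yp) ** p) ** (1.0 / p)

# Identical clouds have distance exactly zero; a translated copy does not.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 3))
print(sliced_wasserstein(X, X))          # exactly 0.0
print(sliced_wasserstein(X, X + 1.0))    # roughly |shift| / sqrt(d) scale, > 0
```

The paper's contribution replaces the blind random draw of `theta` with learned direction sets, leaving the projection-and-sort core unchanged—which is why the selectors are drop-in.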
Problem

Research questions and friction points this paper is trying to address.

Improving sliced Wasserstein distance computation efficiency
Learning optimal projection directions via Bayesian optimization
Enhancing performance in optimization loops like gradient flows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning projection directions via Bayesian optimization
Hybrid approach combining quasi-Monte Carlo with BO refinements
Periodic direction refresh during optimization loops
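The hybrid idea above—seed with a good direction set, then cheaply refine it during optimization—can be illustrated with a toy stand-in. The sketch below substitutes a random seed set for the QMC construction and a greedy random-search loop for the Bayesian optimization; it only illustrates the shape of the approach, and all names and parameters are our own:

```python
import numpy as np

def w1_1d(x, y):
    """1D Wasserstein-1 between equal-size empirical measures."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def refine_directions(X, Y, n_dirs=16, n_steps=20, step=0.2, rng=None):
    """Toy direction learning: seed unit directions, then locally perturb
    each one and keep perturbations that increase the projected 1D
    Wasserstein cost (i.e., prefer more discriminative directions).
    A random-search stand-in for the paper's Bayesian optimization."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Seed set (the paper seeds from QSW direction sets instead).
    theta = rng.standard_normal((n_dirs, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    score = np.array([w1_1d(X @ t, Y @ t) for t in theta])
    for _ in range(n_steps):
        # Perturb on the sphere and keep only improvements.
        cand = theta + step * rng.standard_normal(theta.shape)
        cand /= np.linalg.norm(cand, axis=1, keepdims=True)
        cand_score = np.array([w1_1d(X @ t, Y @ t) for t in cand])
        better = cand_score > score
        theta[better] = cand[better]
        score[better] = cand_score[better]
    return theta, score

# If Y is X shifted only along axis 0, informative directions align with e_0.
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 3))
Y = X.copy()
Y[:, 0] += 2.0
theta, score = refine_directions(X, Y, rng=1)
print(np.abs(theta[np.argmax(score), 0]))  # close to 1: best direction ~ e_0
```

In an optimization loop, a periodic "restart" (as in ARBOSW) amounts to re-running the refinement every few iterations as the source distribution moves, rather than reusing a stale direction set.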
Manish Acharya
Department of Computer Science, Vanderbilt University, Nashville, TN, USA
David Hyde
Unknown affiliation
computational physics · fluid simulation · machine learning · high-performance computing