🤖 AI Summary
In high-dimensional distribution matching, Monte Carlo estimation of the sliced Wasserstein distance (SWD) suffers from high variance, leading to noisy gradients and slow convergence. To address this, we propose the reweighted sliced Wasserstein distance (ReSWD), the first method to incorporate weighted reservoir sampling into SWD computation. ReSWD adaptively prioritizes projection directions that are information-rich and yield larger gradient contributions, thereby substantially reducing estimation variance while preserving unbiasedness. Crucially, ReSWD supports end-to-end differentiable optimization, combining computational efficiency with training stability. Experiments on synthetic benchmarks and real-world tasks—including color correction and diffusion model guidance—demonstrate that ReSWD consistently outperforms standard SWD and existing variance-reduction approaches, achieving faster convergence and superior distribution matching quality.
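The summary credits weighted reservoir sampling as the key ingredient. The paper's exact integration into SWD is not shown here, but the classical weighted reservoir sampling algorithm it builds on (Efraimidis–Spirakis A-Res) can be sketched in a few lines: each stream item with weight `w` gets a key `u**(1/w)` for `u ~ Uniform(0,1)`, and the `k` items with the largest keys form a sample drawn with probability proportional to weight, in a single pass.

```python
import heapq
import random

def weighted_reservoir_sample(stream, k, rng=None):
    """Efraimidis-Spirakis A-Res: one-pass weighted sampling without
    replacement. `stream` yields (item, weight) pairs with weight > 0;
    returns k items, each kept with probability proportional to its weight.
    """
    rng = rng or random.Random()
    heap = []  # min-heap of (key, item); the root is the smallest kept key
    for item, w in stream:
        key = rng.random() ** (1.0 / w)  # larger weight -> key closer to 1
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            # New key beats the weakest survivor: evict and replace it.
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]
```

In ReSWD's setting, the "items" would be projection directions and the weights their estimated informativeness, so that high-contribution directions persist across optimization steps; the exact weighting scheme is the paper's contribution and is not reproduced here.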
📝 Abstract
Distribution matching is central to many vision and graphics tasks, but the widely used Wasserstein distance is too costly to compute for high-dimensional distributions. The Sliced Wasserstein Distance (SWD) offers a scalable alternative, yet its Monte Carlo estimator suffers from high variance, resulting in noisy gradients and slow convergence. We introduce Reservoir SWD (ReSWD), which integrates Weighted Reservoir Sampling into SWD to adaptively retain informative projection directions across optimization steps, yielding stable gradients while remaining unbiased. Experiments on synthetic benchmarks and real-world tasks such as color correction and diffusion guidance show that ReSWD consistently outperforms standard SWD and other variance reduction baselines. Project page: https://reservoirswd.github.io/
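For context, the standard Monte Carlo SWD estimator that ReSWD improves on works by projecting both sample sets onto random unit directions, where the 1-D Wasserstein distance reduces to comparing sorted projections. A minimal NumPy sketch of that baseline (assuming equal-size sample sets and the 2-Wasserstein cost; this is the vanilla estimator, not ReSWD) might look like:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=128, rng=None):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance.

    X, Y: (n, d) arrays of samples from the two distributions (equal n,
    so the 1-D optimal transport plan is just the sorted matching).
    Averages the squared 1-D Wasserstein distance over n_proj random
    unit directions; the variance of this average is what ReSWD targets.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)       # uniform direction on the sphere
        px = np.sort(X @ theta)              # 1-D projections, sorted
        py = np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)     # 1-D W2^2 via sorted matching
    return np.sqrt(total / n_proj)
```

Because each direction's contribution varies widely, a small `n_proj` gives a noisy estimate and noisy gradients; ReSWD's reweighted reservoir of directions reduces this variance without biasing the estimate.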