ReSWD: ReSTIR'd, not shaken. Combining Reservoir Sampling and Sliced Wasserstein Distance for Variance Reduction

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-dimensional distribution matching, Monte Carlo estimation of the sliced Wasserstein distance (SWD) suffers from high variance, leading to noisy gradients and slow convergence. To address this, we propose the reweighted sliced Wasserstein distance (ReSWD), the first method to incorporate weighted reservoir sampling into SWD computation. ReSWD adaptively prioritizes projection directions that are information-rich and yield larger gradient contributions, thereby substantially reducing estimation variance while preserving unbiasedness. Crucially, ReSWD supports end-to-end differentiable optimization, combining computational efficiency with training stability. Experiments on synthetic benchmarks and real-world tasks—including color correction and diffusion model guidance—demonstrate that ReSWD consistently outperforms standard SWD and existing variance-reduction approaches, achieving faster convergence and superior distribution matching quality.

📝 Abstract
Distribution matching is central to many vision and graphics tasks, but the widely used Wasserstein distance is too costly to compute for high-dimensional distributions. The Sliced Wasserstein Distance (SWD) offers a scalable alternative, yet its Monte Carlo estimator suffers from high variance, resulting in noisy gradients and slow convergence. We introduce Reservoir SWD (ReSWD), which integrates Weighted Reservoir Sampling into SWD to adaptively retain informative projection directions across optimization steps, yielding stable gradients while remaining unbiased. Experiments on synthetic benchmarks and real-world tasks such as color correction and diffusion guidance show that ReSWD consistently outperforms standard SWD and other variance-reduction baselines. Project page: https://reservoirswd.github.io/
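The Monte Carlo estimator the abstract refers to averages 1D Wasserstein distances over random projection directions; its variance comes from this random draw. A minimal sketch of that standard estimator (illustrative helper, not the authors' code):

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, rng=None):
    """Monte Carlo estimate of the sliced 1-Wasserstein distance
    between two point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[1]
    # Draw random projection directions uniformly on the unit sphere.
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project to 1D per direction; sorting gives the 1D optimal coupling.
    px = np.sort(x @ theta.T, axis=0)  # (n, n_proj)
    py = np.sort(y @ theta.T, axis=0)
    # Average the 1D transport costs over all sampled directions.
    return np.abs(px - py).mean()
```

Because only a finite set of directions is sampled each step, the estimate (and its gradient) fluctuates from iteration to iteration; this is the variance ReSWD targets.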
Problem

Research questions and friction points this paper is trying to address.

Reduces high variance in Sliced Wasserstein Distance estimation
Improves gradient stability for distribution matching optimization
Enhances convergence in vision and graphics tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines reservoir sampling with sliced Wasserstein distance
Adaptively retains informative projection directions during optimization
Reduces variance while maintaining unbiased gradient estimates
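The reservoir idea in the bullets above can be illustrated with classic weighted reservoir sampling (the A-Res scheme of Efraimidis and Spirakis): keep k items from a weighted stream, each retained with probability proportional to its weight. In ReSWD's setting the items would be projection directions and the weights their estimated contributions; this sketch is an analogy, not the authors' exact scheme.

```python
import heapq
import random

def weighted_reservoir(stream, k, rng=None):
    """A-Res weighted reservoir sampling: keep k items from a stream
    of (item, weight) pairs, with inclusion probability proportional
    to weight. Single pass, O(k) memory."""
    rng = rng or random.Random()
    heap = []  # min-heap of (key, item); smallest key is evicted first
    for item, w in stream:
        # Each item gets key u^(1/w); larger weights push keys toward 1.
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]
```

The single-pass, bounded-memory structure is what makes the reservoir view attractive for per-iteration direction sampling: informative directions can persist across optimization steps without storing the full history.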