On Fitting Flow Models with Large Sinkhorn Couplings

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pairing source and target points independently at random when training flow models leads to slow convergence and high inference cost. Method: we instead draw training pairs from optimal transport (OT) couplings, addressing two limitations of common minibatch-OT practice, namely small batch sizes and high entropic regularization (ε) in the Sinkhorn algorithm. Specifically, we scale the Sinkhorn batch size to 10⁵ (three to four orders of magnitude beyond the usual n ≈ 256), introduce scale-invariant metrics of coupling sharpness, and systematically demonstrate the gains of low-ε regularization for flow matching, which brings the learned flow closer to the Benamou–Brenier dynamical OT solution. Large-scale OT is made feasible by sharding computations across multiple GPUs and GPU nodes. Results: on both synthetic and image generation benchmarks, our approach significantly accelerates training, reduces the number of ODE integration steps, and achieves better FID and LPIPS than small-batch and high-ε baselines.
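
To make the coupling step concrete, here is a minimal NumPy sketch of a minibatch Sinkhorn coupling followed by pair resampling, assuming squared-Euclidean costs, uniform marginals, and a fixed iteration count. It illustrates the general technique only, not the paper's implementation; the function names and the mean-cost rescaling are illustrative assumptions.

```python
import numpy as np

def sinkhorn_coupling(x, y, eps, n_iters=200):
    """Entropic OT coupling between uniform measures on x (n, d) and y (m, d).

    Illustrative only: dense kernel, fixed iteration count, and no log-domain
    stabilization, which becomes necessary when eps is small.
    """
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean costs
    K = np.exp(-C / (eps * C.mean()))  # Gibbs kernel; rescaling the cost by its
                                       # mean makes eps comparable across batches
    n, m = C.shape
    a = np.full(n, 1.0 / n)            # uniform source marginal
    b = np.full(m, 1.0 / m)            # uniform target marginal
    v = np.ones(m)
    for _ in range(n_iters):           # alternate fitting each marginal
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # coupling matrix P

def resample_pairs(P, rng):
    """For each source point, draw a target index proportionally to its row of P."""
    rows = P / P.sum(axis=1, keepdims=True)
    return np.array([rng.choice(len(row), p=row) for row in rows])
```

With `idx = resample_pairs(P, np.random.default_rng(0))`, the re-paired batch `(x, y[idx])` replaces independently drawn pairs in the flow-matching objective; the lower ε is, the more concentrated each row of P and the sharper the pairing.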

📝 Abstract
Flow models transform data gradually from one modality (e.g. noise) onto another (e.g. images). Such models are parameterized by a time-dependent velocity field, trained to fit segments connecting pairs of source and target points. When the pairing between source and target points is given, training flow models boils down to a supervised regression problem. When no such pairing exists, as is the case when generating data from noise, training flows is much harder. A popular approach lies in picking source and target points independently. This can, however, lead to velocity fields that are slow to train, but also costly to integrate at inference time. In theory, one would greatly benefit from training flow models by sampling pairs from an optimal transport (OT) measure coupling source and target, since this would lead to a highly efficient flow solving the Benamou and Brenier dynamical OT problem. In practice, recent works have proposed to sample mini-batches of $n$ source and $n$ target points and reorder them using an OT solver to form better pairs. These works have advocated using batches of size $n \approx 256$, and considered OT solvers that return couplings that are either sharp (using e.g. the Hungarian algorithm) or blurred (using e.g. entropic regularization, a.k.a. Sinkhorn). We follow in the footsteps of these works by exploring the benefits of increasing $n$ by three to four orders of magnitude, and look more carefully at the effect of the entropic regularization $\varepsilon$ used in the Sinkhorn algorithm. Our analysis is facilitated by new scale-invariant quantities to report the sharpness of a coupling, while our sharded computations across multiple GPUs or GPU nodes allow scaling up $n$. We show that in both synthetic and image generation tasks, flow models greatly benefit when fitted with large Sinkhorn couplings, with a low entropic regularization $\varepsilon$.
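
The supervised regression problem the abstract refers to can be written in a few lines: given paired points, interpolate along the connecting segment and regress the velocity field onto the segment's constant velocity. The sketch below is a generic conditional flow-matching step, not the paper's code; `velocity_field` is a placeholder for any parameterized model.

```python
import numpy as np

def flow_matching_loss(velocity_field, x0, x1, rng):
    """Squared regression loss on segments connecting paired points.

    velocity_field(x_t, t): callable returning a predicted velocity with the
    shape of x_t; x0 and x1 are paired source/target batches of shape (n, d).
    """
    t = rng.uniform(size=(x0.shape[0], 1))  # one random time per pair
    x_t = (1.0 - t) * x0 + t * x1           # point on the segment at time t
    target = x1 - x0                        # the segment's constant velocity
    pred = velocity_field(x_t, t)
    return ((pred - target) ** 2).mean()    # supervised regression objective
```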
Problem

Research questions and friction points this paper is trying to address.

Random source-target pairing makes flow models slow to train and costly to integrate at inference
How the entropic regularization $\varepsilon$ of the Sinkhorn algorithm affects the fitted flow
Whether scaling the OT batch size $n$ far beyond the usual $n \approx 256$ improves flow model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fits flow models with large Sinkhorn couplings, three to four orders of magnitude beyond prior batch sizes
Shards OT computations across multiple GPUs or GPU nodes to reach these batch sizes (see the sketch after this list)
Shows that low entropic regularization $\varepsilon$ in the Sinkhorn algorithm is key, using new scale-invariant sharpness metrics
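
The sharding idea can be illustrated schematically: split the rows of the cost matrix into per-device blocks so that no single device materializes the full kernel, keep the row update local, and reduce the partial matrix-vector products across devices at each column update. The sketch below simulates shards with a plain Python list; a real multi-GPU version would place each block on its own device and replace the column-update sum with a collective reduction. This is an assumed illustration of the general pattern, not the paper's implementation.

```python
import numpy as np

def sharded_sinkhorn(x_shards, y, eps, n_iters=200):
    """Sinkhorn iterations with the kernel's rows sharded over 'devices'.

    x_shards: list of (n_k, d) arrays, one per simulated device. Each device
    holds only its (n_k, m) kernel block, never the full (n, m) matrix.
    """
    m = y.shape[0]
    Ks = []
    for xs in x_shards:
        C = ((xs[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        Ks.append(np.exp(-C / eps))  # per-device Gibbs kernel block
    n = sum(K.shape[0] for K in Ks)
    a_shards = [np.full(K.shape[0], 1.0 / n) for K in Ks]  # uniform source mass
    b = np.full(m, 1.0 / m)                                # uniform target mass
    v = np.ones(m)
    for _ in range(n_iters):
        # row update: purely local to each device
        us = [a / (K @ v) for K, a in zip(Ks, a_shards)]
        # column update: sum partial K^T u over devices (an all-reduce in practice)
        v = b / sum(K.T @ u for K, u in zip(Ks, us))
    # per-shard blocks of the coupling matrix
    return [u[:, None] * K * v[None, :] for K, u in zip(Ks, us)]
```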