Terminal Velocity Matching

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two obstacles to high-fidelity one-/few-step generation, namely flow matching's difficulty in modeling transitions across diffusion timesteps and the absence of constraints on terminal-time behavior, this paper proposes Terminal Velocity Matching (TVM). TVM shifts flow matching's regularization from the initial to the terminal time of each transition, with a proof that it upper-bounds the 2-Wasserstein distance between data and model distributions when the model is Lipschitz continuous. Methodologically, the authors make minimal changes to the Diffusion Transformer architecture and develop a fused attention kernel that efficiently supports backward passes on Jacobian-vector products, enabling stable single-stage training. On ImageNet-256×256, TVM achieves FID scores of 3.29 (one-step) and 1.99 (four-step); on ImageNet-512×512, it attains 4.32 (one-step) and 2.94 (four-step), surpassing current state-of-the-art one/few-step models trained from scratch.

📝 Abstract
We propose Terminal Velocity Matching (TVM), a generalization of flow matching that enables high-fidelity one- and few-step generative modeling. TVM models the transition between any two diffusion timesteps and regularizes its behavior at its terminal time rather than at the initial time. We prove that TVM provides an upper bound on the $2$-Wasserstein distance between data and model distributions when the model is Lipschitz continuous. However, since Diffusion Transformers lack this property, we introduce minimal architectural changes that achieve stable, single-stage training. To make TVM efficient in practice, we develop a fused attention kernel that supports backward passes on Jacobian-Vector Products, which scale well with transformer architectures. On ImageNet-256x256, TVM achieves 3.29 FID with a single function evaluation (NFE) and 1.99 FID with 4 NFEs. It similarly achieves 4.32 1-NFE FID and 2.94 4-NFE FID on ImageNet-512x512, representing state-of-the-art performance for one/few-step models from scratch.
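As background to the abstract, a minimal NumPy sketch of the standard linear-interpolant flow matching that TVM generalizes (the toy data and shapes are illustrative only); TVM itself constrains the model at the terminal time of each transition rather than reproducing this initial-time objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a data batch and a noise batch (illustrative shapes).
x0 = rng.standard_normal((4, 8))   # data samples at t = 0
x1 = rng.standard_normal((4, 8))   # noise samples at t = 1
t = rng.uniform(size=(4, 1))       # diffusion times in [0, 1]

# Linear interpolant and its target velocity along the straight path.
x_t = (1 - t) * x0 + t * x1
v_target = x1 - x0

# With the exact velocity, one Euler step (1 NFE) from x_t back to t = 0
# recovers the data, which is the few-step regime the paper targets.
x0_rec = x_t - t * v_target
assert np.allclose(x0_rec, x0)
```

The straight-line interpolant keeps the example short; the paper's objective additionally models transitions between arbitrary timestep pairs.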
Problem

Research questions and friction points this paper is trying to address.

Generalizing flow matching for high-fidelity one- and few-step generation
Regularizing model behavior at the terminal time of each diffusion transition
Achieving state-of-the-art image generation with minimal function evaluations (NFEs)
Innovation

Methods, ideas, or system contributions that make the work stand out.

TVM generalizes flow matching to transitions between any two diffusion timesteps
TVM regularizes transition behavior at the terminal rather than the initial time
A fused attention kernel enables efficient backward passes on Jacobian-vector products
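The last bullet concerns Jacobian-vector products (JVPs). As an illustration only, here is a hand-rolled forward-mode JVP for a toy two-layer network in NumPy (weights and shapes are made up); the paper's actual contribution is a fused attention kernel that computes the analogous quantity inside a Diffusion Transformer and supports backpropagating through it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP standing in for a velocity network v(x).
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 8)), np.zeros(8)

def v(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def jvp(x, u):
    # Forward-mode AD by hand: propagate the tangent u alongside the primal,
    # yielding J(x) @ u without ever forming the full Jacobian.
    h = np.tanh(x @ W1 + b1)
    dh = (1 - h ** 2) * (u @ W1)   # tanh'(pre) times the propagated tangent
    return dh @ W2

x = rng.standard_normal(8)
u = rng.standard_normal(8)

# Sanity check against a central finite-difference approximation of J(x) @ u.
eps = 1e-6
fd = (v(x + eps * u) - v(x - eps * u)) / (2 * eps)
assert np.allclose(jvp(x, u), fd, atol=1e-4)
```

Autodiff frameworks expose this directly (e.g. forward-mode `jvp` transforms); the engineering challenge the paper addresses is making the backward pass *through* such a JVP efficient for attention layers.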