$\textrm{ODE}_t\left(\textrm{ODE}_l\right)$: Shortcutting the Time and Length in Diffusion and Flow Models for Faster Sampling

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion and continuous normalizing flow models rely on multi-step ODE solvers for sampling, resulting in low efficiency. To address this, we propose a dual-decoupled ODE solving framework that independently parameterizes temporal discretization and network depth. Specifically, we design a solver-agnostic nested ODE structure, ODEₜ(ODEₗ), enabling arbitrary time-step scheduling and dynamic Transformer block allocation. We further introduce a duration-length consistency training scheme to jointly optimize step size and layer count. Coupled with a block-rewiring architecture and a flow matching loss, our method achieves up to a 3× sampling speedup on CelebA-HQ and ImageNet while improving FID by up to 3.5 points in high-fidelity settings, significantly reducing latency and GPU memory consumption. The core contribution is the first decoupling and co-optimization of ODE solving across both the temporal and the architectural (network-depth) dimensions.

📝 Abstract
Recently, continuous normalizing flows (CNFs) and diffusion models (DMs) have been studied within a unified theoretical framework. Although such models can generate high-quality data points from a noise distribution, sampling demands multiple iterations to solve an ordinary differential equation (ODE) with high computational complexity. Most existing methods focus on reducing the number of time steps during the sampling process to improve efficiency. In this work, we explore a complementary direction in which the quality-complexity tradeoff can be dynamically controlled in terms of time steps and in the length of the neural network. We achieve this by rewiring the blocks in the transformer-based architecture to solve an inner discretized ODE w.r.t. its length. Then, we employ time- and length-wise consistency terms during flow matching training, and as a result, sampling can be performed with an arbitrary number of time steps and transformer blocks. Unlike others, our $\textrm{ODE}_t\left(\textrm{ODE}_l\right)$ approach is solver-agnostic in the time dimension and decreases both latency and memory usage. Compared to the previous state of the art, image generation experiments on CelebA-HQ and ImageNet show a latency reduction of up to $3\times$ in the most efficient sampling mode, and an FID score improvement of up to $3.5$ points for high-quality sampling. We release our code and model weights with fully reproducible experiments.
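The nested structure described above can be sketched as two Euler loops: an outer one over sampling time and an inner one over network depth, where a depth of `length` blocks discretizes the inner ODE. This is only a minimal illustration of the idea, not the paper's implementation; the toy linear `blocks` stand in for transformer blocks, and all names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
num_blocks = 6
# Hypothetical stand-ins for transformer blocks: small fixed linear maps.
blocks = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(num_blocks)]

def velocity(x, t, length):
    """Inner ODE_l: integrate over network depth using only the first
    `length` blocks, i.e. a depth-wise Euler discretization."""
    h = x.copy()
    dl = 1.0 / length
    for i in range(length):
        h = h + dl * blocks[i] @ h  # residual update = one Euler step in depth
    return h

def sample(x0, num_steps, length):
    """Outer ODE_t: Euler integration of the learned velocity field
    from noise (t=0) toward data (t=1)."""
    x = x0.copy()
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = k * dt
        x = x + dt * velocity(x, t, length)
    return x

x0 = rng.normal(size=dim)
fast = sample(x0, num_steps=2, length=2)  # cheap mode: few steps, shallow net
slow = sample(x0, num_steps=8, length=6)  # quality mode: more of both
```

Because the outer loop only calls `velocity` as a black box, any time-dimension solver (Euler, Heun, etc.) can replace the outer update, which is the sense in which the approach is solver-agnostic in time.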
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity in diffusion and flow models
Dynamic control of quality-complexity tradeoff in sampling
Decreasing latency and memory usage during sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic control of time steps and network length
Rewiring transformer blocks for inner ODE solving
Solver-agnostic approach reducing latency and memory
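The time- and length-wise consistency training mentioned above can be sketched as a standard conditional flow matching objective plus a term that ties shallow (truncated-depth) predictions to deeper ones. This is a hedged illustration under assumed conventions (linear interpolation path, velocity target $x_1 - x_0$); the `model` function and all names are hypothetical placeholders, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

def model(x, t, length):
    # Hypothetical stand-in for a depth-truncated transformer velocity field;
    # `length` scales the output to mimic using fewer blocks.
    return np.tanh(x * (1.0 + t)) * (length / 6.0)

def flow_matching_loss(x0, x1, t, length):
    """Conditional flow matching: sample x_t on the straight path
    (1-t) x0 + t x1 and regress the velocity onto x1 - x0."""
    xt = (1 - t) * x0 + t * x1
    v = model(xt, t, length)
    return np.mean((v - (x1 - x0)) ** 2)

def length_consistency_loss(x0, x1, t, l_small, l_large):
    """Assumed length-wise consistency term: the prediction with fewer
    blocks should match the deeper model's prediction at the same x_t."""
    xt = (1 - t) * x0 + t * x1
    return np.mean((model(xt, t, l_small) - model(xt, t, l_large)) ** 2)

x0, x1 = rng.normal(size=dim), rng.normal(size=dim)
fm = flow_matching_loss(x0, x1, t=0.5, length=6)
cons = length_consistency_loss(x0, x1, t=0.5, l_small=2, l_large=6)
total = fm + cons  # joint objective over step size and layer count
```

A time-wise consistency term would follow the same pattern across different values of `t`, encouraging predictions to agree when the number of sampling steps changes.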