ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation

📅 2026-02-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing diffusion model distillation methods rely on linear trajectory approximations to mimic the teacher model's nonlinear denoising process, which fails to accurately capture the time-varying directionality of the velocity field and consequently degrades generation quality. To address this limitation, this work proposes ArcFlow, a novel framework that explicitly models nonlinear flow trajectories as a mixture of continuous momentum processes, enabling exact analytical integration and thus avoiding numerical discretization errors. ArcFlow further incorporates lightweight adapters for trajectory distillation, requiring fine-tuning of fewer than 5% of the model parameters. Under a two-step sampling regime (NFE=2), ArcFlow achieves a 40× speedup while preserving generation fidelity and diversity on par with the original multi-step teacher model, significantly outperforming current state-of-the-art distillation approaches.

πŸ“ Abstract
Diffusion models have achieved remarkable generation quality, but they suffer from significant inference cost due to their reliance on multiple sequential denoising steps, motivating recent efforts to distill this inference process into a few-step regime. However, existing distillation methods typically approximate the teacher trajectory with linear shortcuts, which makes it difficult to match the teacher trajectory's constantly changing tangent directions as velocities evolve across timesteps, thereby leading to quality degradation. To address this limitation, we propose ArcFlow, a few-step distillation framework that explicitly employs non-linear flow trajectories to approximate pre-trained teacher trajectories. Concretely, ArcFlow parameterizes the velocity field underlying the inference trajectory as a mixture of continuous momentum processes. This enables ArcFlow to capture velocity evolution and extrapolate coherent velocities to form a continuous non-linear trajectory within each denoising step. Importantly, this parameterization admits an analytical integration of the non-linear trajectory, which circumvents numerical discretization errors and yields a high-precision approximation of the teacher trajectory. To train this parameterization into a few-step generator, we implement ArcFlow via trajectory distillation on pre-trained teacher models using lightweight adapters. This strategy ensures fast, stable convergence while preserving generative diversity and quality. Built on large-scale models (Qwen-Image-20B and FLUX.1-dev), ArcFlow fine-tunes less than 5% of the original parameters and achieves a 40x speedup with 2 NFEs over the original multi-step teachers without significant quality degradation. Experiments on benchmarks show the effectiveness of ArcFlow both qualitatively and quantitatively.
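The advantage of an analytically integrable velocity parameterization over linear (Euler) shortcuts can be illustrated with a toy sketch. This is not ArcFlow's actual parameterization or training code; the mixture coefficients and decay rates below are hypothetical values chosen only to show that a mixture of exponentially decaying momentum-like terms admits a closed-form trajectory integral, whereas discretized integration of the same velocity field incurs step-size error.

```python
import numpy as np

# Toy velocity field: a mixture of exponentially decaying terms,
#     v(t) = sum_i c_i * exp(-lam_i * t).
# Each component integrates in closed form, so the trajectory update
#     x(t) = x(0) + integral_0^t v(s) ds
# is exact, with no intra-step discretization error.
# (c and lam are illustrative toy values, not ArcFlow's parameters.)
c = np.array([1.0, -0.5, 0.25])   # mixture coefficients
lam = np.array([0.5, 2.0, 5.0])   # decay rates of each component

def v(t):
    """Velocity at time t under the mixture parameterization."""
    return float(np.sum(c * np.exp(-lam * t)))

def x_analytic(t, x0=0.0):
    """Exact trajectory: each exponential term integrates in closed form."""
    return x0 + float(np.sum(c / lam * (1.0 - np.exp(-lam * t))))

def x_euler(t, n_steps, x0=0.0):
    """Naive Euler discretization of the same velocity field."""
    x, dt = x0, t / n_steps
    for k in range(n_steps):
        x += dt * v(k * dt)  # linear shortcut within each sub-step
    return x

t_end = 1.0
exact = x_analytic(t_end)
for n in (1, 4, 16):
    err = abs(x_euler(t_end, n) - exact)
    print(f"Euler with {n:2d} step(s): |error| = {err:.5f}")
```

Running the sketch shows the Euler error shrinking only as the step count grows, while the closed-form integral is exact at any step count; this mirrors the paper's motivation for choosing a parameterization that can be integrated analytically in a 2-NFE regime.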
Problem

Research questions and friction points this paper is trying to address.

diffusion models
few-step distillation
non-linear trajectory
inference acceleration
text-to-image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

non-linear flow distillation
trajectory distillation
few-step generation
velocity field modeling
diffusion model acceleration