🤖 AI Summary
This work addresses the inefficiency of flow-matching diffusion models, which typically require many sampling steps. We propose a post-training compression method that requires no retraining from scratch. Our approach centers on an online self-distillation mechanism grounded in the velocity field, combined with trajectory-skipping learning and a lightweight formulation that needs no step-size embedding—enabling, for the first time, aggressive step skipping in standard flow-matching models (e.g., Flux). It supports both integration into pretraining and standalone post-training paradigms, and is the first few-shot distillation framework applicable to diffusion models at the ten-billion-parameter scale. Using less than one A100 GPU-day, we compress Flux into a 3-step sampler, achieving state-of-the-art generation quality at minimal computational cost. Moreover, the method enables efficient adaptation from as few as ten text–image pairs.
📝 Abstract
We present an ultra-efficient post-training method for shortcutting large-scale pre-trained flow matching diffusion models into efficient few-step samplers, enabled by novel velocity field self-distillation. While shortcutting in flow matching, originally introduced by shortcut models, offers flexible trajectory-skipping capabilities, it requires a specialized step-size embedding incompatible with existing models unless retraining from scratch, a process nearly as costly as pretraining itself.
Our key contribution is thus imparting a more aggressive shortcut mechanism to standard flow matching models (e.g., Flux), leveraging a unique distillation principle that obviates the need for step-size embedding. Working in the velocity field rather than sample space and learning rapidly from self-guided distillation in an online manner, our approach trains efficiently, e.g., producing a 3-step Flux in under one A100 GPU-day. Beyond distillation, our method can be incorporated into the pretraining stage itself, yielding models that inherently learn efficient, few-step flows without compromising quality. This capability also enables, to our knowledge, the first few-shot distillation method (e.g., 10 text-image pairs) for diffusion models with over ten billion parameters, delivering state-of-the-art performance at almost no cost.
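To make the velocity-field self-distillation idea concrete, here is a minimal, hypothetical sketch of a trajectory-skipping loss in the spirit the abstract describes: the model's own velocity predictions over two half-steps (under stop-gradient) form an online target for the velocity over the full skipped step, with no step-size embedding required. The function name, signatures, and Euler integration scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def shortcut_self_distill_loss(model, x, t, dt):
    # Hypothetical sketch of velocity-field self-distillation.
    # `model(x, t)` is assumed to predict the velocity field v(x, t)
    # of a flow-matching model; no step-size conditioning is used.
    with torch.no_grad():
        v1 = model(x, t)                 # velocity at the current point
        x_mid = x + 0.5 * dt * v1        # half Euler step along the flow
        v2 = model(x_mid, t + 0.5 * dt)  # velocity at the midpoint
        # Self-guided target: average velocity over the two half-steps,
        # i.e. the effective velocity of one skipped double-length step.
        v_target = 0.5 * (v1 + v2)
    v_pred = model(x, t)                 # student prediction (with grad)
    return torch.mean((v_pred - v_target) ** 2)
```

In an online training loop this loss would be minimized alongside (or after) the standard flow-matching objective, so the model gradually learns to take one large step that matches its own two-step trajectory.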