🤖 AI Summary
Large multimodal generative models suffer from low inference efficiency, typically requiring 40–100 diffusion steps, while existing few-step acceleration methods rely on pretrained teacher models, exhibit training instability, or incur high GPU memory overhead. To address these limitations, we propose TwinFlow, a self-adversarial flow matching framework that eliminates the need for a fixed teacher model and introduces no auxiliary discriminators. TwinFlow couples the flow matching objective with a self-adversarial mechanism, enabling end-to-end single-step generation. With full-parameter training on large-scale models such as Qwen-Image-20B, TwinFlow achieves a GenEval score of 0.83 in just one inference step, matching the performance of the 100-step baseline while reducing computational cost by 100×. The method demonstrates strong efficiency, training stability, and scalability across model scales and modalities.
📝 Abstract
Recent advances in large multi-modal generative models have demonstrated impressive capabilities in multi-modal generation, including image and video generation. These models are typically built upon multi-step frameworks like diffusion and flow matching, which inherently limits their inference efficiency (requiring 40–100 function evaluations (NFEs)). While various few-step methods aim to accelerate inference, existing solutions have clear limitations. Prominent distillation-based methods, such as progressive and consistency distillation, either require an iterative distillation procedure or show significant degradation at very few steps (< 4 NFEs). Meanwhile, integrating adversarial training into distillation (e.g., DMD/DMD2 and SANA-Sprint) to enhance performance introduces training instability, added complexity, and high GPU memory overhead due to the auxiliary trained models. To this end, we propose TwinFlow, a simple yet effective framework for training 1-step generative models that bypasses the need for a fixed pretrained teacher model and avoids standard adversarial networks during training, making it ideal for building large-scale, efficient models. On text-to-image tasks, our method achieves a GenEval score of 0.83 at 1 NFE, outperforming strong baselines like SANA-Sprint (a GAN loss-based framework) and RCGM (a consistency-based framework). Notably, we demonstrate the scalability of TwinFlow by full-parameter training on Qwen-Image-20B, transforming it into an efficient few-step generator. With just 1 NFE, our approach matches the performance of the original 100-NFE model on both the GenEval and DPG-Bench benchmarks, reducing computational cost by $100\times$ with minor quality degradation. Project page is available at https://zhenglin-cheng.com/twinflow.
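To make the NFE comparison concrete, the following is a minimal toy sketch (not TwinFlow's actual model) contrasting a multi-step flow-matching sampler, which performs one network evaluation per Euler step, with a 1-NFE generator that jumps from noise to data in a single call. The `velocity` function here is a hypothetical stand-in linear field, chosen only for illustration.

```python
def velocity(x, t):
    # Toy velocity field: drives the sample toward 0.
    # A stand-in for a learned network v_theta(x, t).
    return -x

def sample_multistep(x0, nfe=100):
    # Euler integration of dx/dt = v(x, t) from t=0 to t=1.
    # Each step is one network evaluation, i.e. `nfe` NFEs total.
    x, dt = x0, 1.0 / nfe
    for i in range(nfe):
        x = x + dt * velocity(x, i * dt)
    return x

def sample_onestep(x0):
    # A 1-NFE generator evaluates the network once and maps
    # noise directly to the endpoint: x1 = x0 + v(x0, 0).
    return x0 + velocity(x0, 0.0)
```

Under this toy field the 1-NFE path reaches the target in a single evaluation, while the 100-NFE path spends 100× the compute to integrate the same trajectory, which is the cost gap the abstract quantifies.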