🤖 AI Summary
FlowSteer addresses the inefficiency and degraded sample quality of Flow Matching (FM) generative models under few-step sampling, in particular the performance gap between ReFlow and state-of-the-art distillation methods (e.g., consistency distillation and score distillation). It introduces three key innovations: (1) an Online Trajectory Alignment mechanism that mitigates distribution shift during student-model training; (2) an adversarial distillation objective tailored to ODE-based generation trajectories, directly enforcing trajectory-level consistency with the teacher; and (3) a fix for a scheduling flaw in FlowMatchEulerDiscreteScheduler that undermines few-step inference robustness. Evaluated on Stable Diffusion 3 (SD3), FlowSteer achieves substantial improvements in few-step (4–8 steps) image generation quality. For the first time, ReFlow-style FM methods match, and in some cases surpass, leading distillation approaches, establishing an efficient, high-fidelity route to flow-matching generation.
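The few-step regime the summary refers to amounts to integrating the flow-matching ODE with only a handful of Euler steps. A minimal sketch follows; the `euler_fm_sample` helper, the toy velocity field, and the noise-at-`t = 1` time convention are illustrative assumptions, not the paper's code (SD3 uses a learned transformer velocity field and a discrete scheduler).

```python
import numpy as np

def euler_fm_sample(v, x1, num_steps=4):
    """Few-step Euler integration of the flow-matching ODE
    dx/dt = v(x, t), from noise x1 at t = 1 down to data at t = 0."""
    x = np.asarray(x1, dtype=float).copy()
    ts = np.linspace(1.0, 0.0, num_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        x = x + (t_next - t) * v(x, t)  # dt is negative: stepping toward t = 0
    return x

# Toy check: on a straight (rectified) path x_t = t * x1 with data x0 = 0,
# the velocity is the constant v(x, t) = x1, and Euler recovers x0 exactly.
x1 = np.ones(4)
x0 = euler_fm_sample(lambda x, t: x1, x1, num_steps=4)  # → array of zeros
```

With a constant velocity field Euler is exact in any number of steps; the quality gap FlowSteer targets arises because real learned trajectories are curved, so coarse Euler steps accumulate error.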
📝 Abstract
With the success of flow matching in visual generation, sampling efficiency remains a critical bottleneck for its practical application. Among acceleration methods for flow models, ReFlow has been somewhat overlooked despite its theoretical consistency with flow matching, primarily because of its suboptimal performance in practice compared to consistency distillation and score distillation. In this work, we investigate this issue within the ReFlow framework and propose FlowSteer, a method that unlocks the potential of ReFlow-based distillation by guiding the student along the teacher's authentic generation trajectories. We first identify that Piecewise ReFlow's performance is hampered by a critical distribution mismatch during training, and propose Online Trajectory Alignment (OTA) to resolve it. We then introduce an adversarial distillation objective applied directly on the ODE trajectory, improving the student's adherence to the teacher's generation trajectory. Furthermore, we find and fix a previously undiscovered flaw in the widely used FlowMatchEulerDiscreteScheduler that significantly degrades few-step inference quality. Experimental results on SD3 demonstrate our method's efficacy.
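For context, ReFlow-based distillation builds (noise, sample) couplings by running the teacher's ODE, then trains the student to follow the straight path between the two endpoints. A minimal sketch of constructing one such training example, under assumed conventions (the interpolation direction and the `teacher_sample` hook are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reflow_pair(teacher_sample, dim):
    """Build one ReFlow training example: a point x_t on the straight line
    between a noise draw z and the teacher's sample x = ODE(z), plus the
    constant straight-path velocity the student regresses onto.
    Convention assumed here: x_t = t * z + (1 - t) * x, so dx_t/dt = z - x."""
    z = rng.standard_normal(dim)   # noise endpoint of the coupling
    x = teacher_sample(z)          # teacher ODE output (hypothetical hook)
    t = rng.uniform()              # random interpolation time in (0, 1)
    x_t = t * z + (1.0 - t) * x
    target_v = z - x               # regression target for the student velocity
    return x_t, t, target_v
```

The distribution mismatch that OTA addresses arises in this loop: the student is trained on interpolants of teacher-generated pairs, but at inference it visits states produced by its own imperfect steps.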