pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Few-step diffusion and flow-based generative models typically suffer from a teacher-student format mismatch: a velocity-predicting teacher is distilled into a shortcut-predicting student, which complicates distillation and forces a trade-off between sample quality and diversity. To address this, we propose π-Flow, a policy-based flow model that removes this mismatch. The student's output layer predicts a network-free policy in a single step; the policy then produces dynamic velocities at future substeps, enabling fast and accurate ODE integration with no extra network evaluations. Training uses imitation distillation, a standard ℓ₂ flow-matching loss that matches the policy's velocity to the teacher's along the policy's own trajectory. On ImageNet 256², π-Flow achieves a 1-NFE FID of 2.85, surpassing MeanFlow with the same DiT architecture. At only 4 NFEs on large foundation models, including FLUX.1-12B and Qwen-Image-20B, it substantially improves diversity over state-of-the-art few-step methods while preserving teacher-level fidelity.

📝 Abstract
Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models ($\pi$-Flow). $\pi$-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard $\ell_2$ flow matching loss. By simply mimicking the teacher's behavior, $\pi$-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 256$^2$, it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, $\pi$-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality.
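The sampling procedure described in the abstract, one network call that emits a policy, followed by cheap substep ODE integration, can be sketched as below. This is an illustrative toy, not the paper's actual model or API: the policy here is a simple linear velocity field whose coefficients stand in for the student's predicted policy parameters, and `student_network`, `sample`, and the Euler integrator are hypothetical names and choices.

```python
import numpy as np

def student_network(x_t, t):
    """Stand-in for the one-step student: a single network call returns a
    network-free policy. Here the policy is a toy linear velocity field
    v(x, s) = a + b * x, with coefficients 'predicted' from (x_t, t)."""
    a = -x_t * t                      # illustrative coefficients only
    b = np.full_like(x_t, -0.5)
    def policy(x, s):
        return a + b * x              # cheap to evaluate at every substep
    return policy

def sample(x1, num_substeps=4):
    """Integrate the policy's ODE from t=1 (noise) toward t=0 (data) with
    explicit Euler substeps; only ONE network evaluation in total."""
    policy = student_network(x1, t=1.0)   # the single NFE
    x, dt = x1, 1.0 / num_substeps
    for k in range(num_substeps):
        s = 1.0 - k * dt
        x = x - dt * policy(x, s)         # substep velocities are free
    return x
```

The point of the sketch is the cost structure: the loop body never touches the network again, so adding substeps refines the ODE solution at negligible overhead.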
Problem

Research questions and friction points this paper is trying to address.

Teachers predict velocities while few-step students predict denoising shortcuts, creating a format mismatch
Existing distillation procedures are complex and suffer from a quality-diversity trade-off
Fast, accurate ODE integration normally requires extra network evaluations at each substep
Innovation

Methods, ideas, or system contributions that make the work stand out.

Student's output layer predicts a network-free policy that generates dynamic substep velocities
Imitation distillation matches the policy's velocity to the teacher's along the policy's own trajectory via an ℓ₂ flow-matching loss
Enables fast, accurate ODE integration on substeps without extra network evaluations
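The imitation-distillation idea above, penalizing the gap between policy and teacher velocities at states visited by the policy's own rollout, can be sketched as a minimal ℓ₂ objective. Assumptions are labeled in the comments: `imitation_loss` is a hypothetical name, the policy and teacher are passed as plain velocity functions, and Euler integration stands in for whatever solver the paper uses.

```python
import numpy as np

def imitation_loss(policy, teacher_velocity, x1, num_substeps=4):
    """Toy imitation-distillation objective (illustrative, not the paper's
    exact training code). Roll out the policy's ODE trajectory and
    accumulate the squared difference between the policy's velocity and
    the teacher's velocity at each visited state: an l2 flow-matching
    loss evaluated along the policy's own trajectory."""
    x, dt, loss = x1, 1.0 / num_substeps, 0.0
    for k in range(num_substeps):
        s = 1.0 - k * dt
        v_pi = policy(x, s)
        v_teacher = teacher_velocity(x, s)     # teacher queried on the
        loss += np.mean((v_pi - v_teacher) ** 2)  # policy's trajectory
        x = x - dt * v_pi                      # step along the policy ODE
    return loss / num_substeps
```

Because the loss is evaluated on states the policy itself reaches, driving it to zero makes the student's trajectory track the teacher's, which is the "mimicking the teacher's behavior" described in the abstract.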