🤖 AI Summary
To address the challenge of efficiently transferring knowledge from pretrained diffusion models to flow matching (FM) models, this paper proposes Diff2Flow—a novel framework enabling systematic diffusion-to-FM alignment. Diff2Flow introduces three key techniques: timestep rescaling, interpolation path alignment, and an analytical mapping from diffusion predictions to FM velocity fields. This constitutes a principled paradigm for adapting diffusion priors to FM models via lightweight fine-tuning, with no extra computational overhead. Built on the Stable Diffusion architecture, it reuses diffusion priors while tuning only a small number of parameters. Extensive experiments demonstrate that Diff2Flow significantly outperforms both naïve FM baselines and diffusion-based fine-tuning approaches across multiple tasks, achieving state-of-the-art or competitive performance with markedly better parameter efficiency—validating it as a scalable, knowledge-transfer-aware FM adaptation method.
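The first two techniques can be illustrated with the standard formulations of the two paradigms. The sketch below assumes a diffusion forward process `x_t = alpha_t * x0 + sigma_t * eps` and the linear (rectified-flow) interpolant `x_tau = (1 - tau) * x0 + tau * eps`; the function names and exact parameterization are illustrative, not taken from the paper's released code.

```python
def rescale_timestep(alpha_t, sigma_t):
    """Map a diffusion noise level (alpha_t, sigma_t) to the FM time tau
    at which the linear interpolant x_tau = (1 - tau)*x0 + tau*eps has the
    same noise-to-signal mixing ratio: tau / (1 - tau) = sigma_t / alpha_t.
    """
    return sigma_t / (alpha_t + sigma_t)

def align_interpolant(x_t, alpha_t, sigma_t):
    """Rescale a diffusion state x_t = alpha_t*x0 + sigma_t*eps so it lies
    on the FM linear path: x_t / (alpha_t + sigma_t) = (1-tau)*x0 + tau*eps
    with tau from rescale_timestep. Works elementwise on scalars or arrays.
    """
    return x_t / (alpha_t + sigma_t)
```

Dividing by `alpha_t + sigma_t` normalizes the interpolation coefficients to sum to one, which is exactly the form the linear FM path expects.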
📝 Abstract
Diffusion models have revolutionized generative tasks through high-fidelity outputs, yet flow matching (FM) offers faster inference and empirical performance gains. However, current foundation FM models are computationally prohibitive for finetuning, while diffusion models like Stable Diffusion benefit from efficient architectures and ecosystem support. This work addresses the critical challenge of efficiently transferring knowledge from pre-trained diffusion models to flow matching. We propose Diff2Flow, a novel framework that systematically bridges diffusion and FM paradigms by rescaling timesteps, aligning interpolants, and deriving FM-compatible velocity fields from diffusion predictions. This alignment enables direct and efficient FM finetuning of diffusion priors with no extra computation overhead. Our experiments demonstrate that Diff2Flow outperforms naïve FM and diffusion finetuning particularly under parameter-efficient constraints, while achieving superior or competitive performance across diverse downstream tasks compared to state-of-the-art methods. We will release our code at https://github.com/CompVis/diff2flow.
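The third ingredient, deriving an FM-compatible velocity field from a diffusion prediction, follows analytically under standard assumptions: an ε-parameterized diffusion model with forward process `x_t = alpha_t * x0 + sigma_t * eps`, and a linear FM interpolant from data to noise whose target velocity is `v = eps - x0`. The sketch below is a hedged illustration of that mapping, not the paper's released implementation.

```python
def diffusion_to_fm_velocity(x_t, eps_pred, alpha_t, sigma_t):
    """Convert an epsilon-prediction from a diffusion model into a velocity
    for the linear FM interpolant x_tau = (1 - tau)*x0 + tau*eps, whose
    velocity is v = eps - x0.

    Recovers the model's implicit x0 estimate from the noisy state and the
    eps-prediction, then forms the velocity. Works elementwise on scalars
    or numpy arrays.
    """
    # Invert x_t = alpha_t*x0 + sigma_t*eps to get the implicit x0 estimate.
    x0_pred = (x_t - sigma_t * eps_pred) / alpha_t
    # Velocity of the linear path points from data toward noise.
    return eps_pred - x0_pred
```

Because the mapping is a closed-form function of quantities the diffusion model already produces, it adds no extra network evaluations, which is consistent with the abstract's claim of no extra computation overhead.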