🤖 AI Summary
Diffusion priors suffer from low efficiency and limited functionality in text-to-3D generation, image inversion, and editing. Method: This work establishes Rectified Flow (RF) as a plug-and-play universal generative prior. Leveraging RF's linear transport property and time symmetry, we design a symmetric variant that enables reversible image inversion, and we embed RF priors into implicit 3D optimization frameworks (e.g., DreamFusion) trained end-to-end with SDS/VSD-style losses. Contribution/Results: We provide theoretical and empirical validation that RF can effectively replace diffusion models as a universal prior. Our approach outperforms the SDS and VSD baselines in text-to-3D synthesis, achieves competitive fidelity in image inversion and editing, and requires fewer inference steps, improving both generation quality and computational efficiency.
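A key reason RF supports reversible inversion is that its transport paths are straight: when the learned velocity is (approximately) constant along a path, fixed-step Euler integration traverses the same trajectory exactly in either direction. A minimal NumPy sketch of this idea, using a hypothetical constant `drift` field rather than the paper's actual symmetric scheme:

```python
import numpy as np

def integrate(x, v, t0, t1, steps):
    """Fixed-step Euler integration of dx/dt = v(x, t) from t0 to t1.
    Swapping t0 and t1 retraces the path in reverse, which is the basis
    of flow-based inversion (a sketch, not the paper's exact method)."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * v(x, t)
        t += dt
    return x

# Hypothetical velocity field: constant along the path, as an idealized
# rectified flow would be after straightening.
drift = np.array([1.0, -2.0, 0.5])
v = lambda x, t: drift

x0 = np.zeros(3)
x1 = integrate(x0, v, 0.0, 1.0, steps=10)      # generation: source -> target
x0_rec = integrate(x1, v, 1.0, 0.0, steps=10)  # inversion: target -> source
```

With a constant velocity the round trip recovers `x0` exactly; for a real trained network the paths are only approximately straight, which is why the paper's symmetric variant matters.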
📝 Abstract
Large-scale diffusion models have achieved remarkable performance in generative tasks. Beyond their original training applications, these models have proven able to function as versatile plug-and-play priors; for instance, 2D diffusion models can serve as loss functions to optimize 3D implicit models. Rectified flow, a novel class of generative models, enforces a linear progression from the source to the target distribution and has demonstrated superior performance across various domains. Compared to diffusion-based methods, rectified flow approaches achieve better generation quality and efficiency, requiring fewer inference steps. In this work, we present theoretical and experimental evidence that rectified-flow-based methods offer functionality similar to that of diffusion models: they too can serve as effective priors. Beyond the generative capabilities of diffusion priors, and motivated by the unique time-symmetry properties of rectified flow models, a variant of our method can additionally perform image inversion. Experimentally, our rectified-flow-based priors outperform their diffusion counterparts, the SDS and VSD losses, in text-to-3D generation. Our method also shows competitive performance in image inversion and editing.
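The prior-as-loss idea above can be sketched in NumPy: interpolate linearly between a noise sample and the current render, query a velocity network, and use the residual against the straight-path velocity as an SDS-style gradient. Here `v_model` is a hypothetical stand-in for a pretrained RF network, and the weighting is a simplifying assumption, not the paper's exact loss:

```python
import numpy as np

def rf_interpolate(x0, x1, t):
    """Rectified-flow transport: a straight line from source x0 to data x1."""
    return (1.0 - t) * x0 + t * x1

def sds_style_grad(v_model, x1, t, rng, weight=1.0):
    """SDS-style prior gradient with an RF velocity network in place of a
    diffusion noise predictor. The exact velocity of the straight path is
    x1 - x0, so the residual pushes the rendered image x1 toward samples
    the prior considers likely."""
    x0 = rng.standard_normal(x1.shape)  # sample the source (noise) point
    xt = rf_interpolate(x0, x1, t)      # point on the linear path at time t
    v_target = x1 - x0                  # straight-path velocity
    return weight * (v_model(xt, t) - v_target)

# Toy usage with a dummy velocity model (illustrative only).
rng = np.random.default_rng(0)
x1 = rng.standard_normal((8, 8))           # stand-in "rendered image"
v_model = lambda xt, t: np.zeros_like(xt)  # dummy pretrained prior
grad = sds_style_grad(v_model, x1, t=0.5, rng=rng)
```

In an actual text-to-3D pipeline, `grad` would be backpropagated through the differentiable renderer into the 3D implicit model's parameters, mirroring how SDS uses a diffusion prior.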