Text-to-Image Rectified Flow as Plug-and-Play Priors

📅 2024-06-05
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion priors suffer from low efficiency and limited functionality in text-to-3D generation, image inversion, and editing. Method: This work pioneers Rectified Flow (RF) as a plug-and-play universal generative prior. Leveraging RF’s linear transport property and time symmetry, we design a symmetric variant enabling reversible image inversion and embed it into implicit 3D optimization frameworks (e.g., DreamFusion), trained end-to-end with SDS/VSD losses. Contribution/Results: We provide the first theoretical and empirical validation that RF can effectively replace diffusion models as a universal prior. Our approach significantly outperforms SDS/VSD baselines in text-to-3D synthesis, achieves competitive fidelity in image inversion and editing, and drastically reduces inference steps—delivering simultaneous gains in both generation quality and computational efficiency.

Technology Category

Application Category

📝 Abstract
Large-scale diffusion models have achieved remarkable performance in generative tasks. Beyond their initial training applications, these models have proven their ability to function as versatile plug-and-play priors. For instance, 2D diffusion models can serve as loss functions to optimize 3D implicit models. Rectified flow, a novel class of generative models, enforces a linear progression from the source to the target distribution and has demonstrated superior performance across various domains. Compared to diffusion-based methods, rectified flow approaches surpass in terms of generation quality and efficiency, requiring fewer inference steps. In this work, we present theoretical and experimental evidence demonstrating that rectified flow based methods offer similar functionalities to diffusion models - they can also serve as effective priors. Besides the generative capabilities of diffusion priors, motivated by the unique time-symmetry properties of rectified flow models, a variant of our method can additionally perform image inversion. Experimentally, our rectified flow-based priors outperform their diffusion counterparts - the SDS and VSD losses - in text-to-3D generation. Our method also displays competitive performance in image inversion and editing.
Problem

Research questions and friction points this paper is trying to address.

Rectified flow as priors
Superior in text-to-3D generation
Effective in image inversion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rectified flow models
Plug-and-play priors
Image inversion capability
🔎 Similar Papers
No similar papers found.
X
Xiaofeng Yang
College of Computing and Data Science, Nanyang Technological University, Singapore
C
Cheng Chen
College of Computing and Data Science, Nanyang Technological University, Singapore
Xulei Yang
Xulei Yang
Principal Scientist & Group Leader, A*STAR, Singapore
3D VisionArtificial IntelligenceMedical Imaging
Fayao Liu
Fayao Liu
Institute for Infocomm Research, A*STAR
Machine LearningComputer Vision
Guosheng Lin
Guosheng Lin
Nanyang Technological University
Computer VisionMachine Learning