FlowOpt: Fast Optimization Through Whole Flow Processes for Training-Free Editing

πŸ“… 2025-10-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Iterative sampling in diffusion and flow-matching models makes gradient-based end-to-end control of the generated image computationally impractical, forcing existing methods to manipulate each timestep separately. To address this, the authors propose FlowOpt, a zeroth-order (gradient-free) optimization framework that treats the entire sampling trajectory as a black box, requiring no backpropagation through the model. Its key contributions are: (1) gradient-free optimization through the complete sampling path; (2) support for monitoring intermediate results and early stopping; and (3) a proven sufficient condition on the step size, which can be estimated empirically, under which convergence to the global optimum is guaranteed. Experiments demonstrate that FlowOpt achieves state-of-the-art results on image editing, both via inversion (recovering the initial noise that generates a given image) and via directly steering the edited image toward a target text prompt, while using roughly the same number of neural function evaluations (NFEs) as existing methods.

πŸ“ Abstract
The remarkable success of diffusion and flow-matching models has ignited a surge of works on adapting them at test time for controlled generation tasks. Examples range from image editing to restoration, compression and personalization. However, due to the iterative nature of the sampling process in those models, it is computationally impractical to use gradient-based optimization to directly control the image generated at the end of the process. As a result, existing methods typically resort to manipulating each timestep separately. Here we introduce FlowOpt - a zero-order (gradient-free) optimization framework that treats the entire flow process as a black box, enabling optimization through the whole sampling path without backpropagation through the model. Our method is both highly efficient and allows users to monitor the intermediate optimization results and perform early stopping if desired. We prove a sufficient condition on FlowOpt's step-size, under which convergence to the global optimum is guaranteed. We further show how to empirically estimate this upper bound so as to choose an appropriate step-size. We demonstrate how FlowOpt can be used for image editing, showcasing two options: (i) inversion (determining the initial noise that generates a given image), and (ii) directly steering the edited image to be similar to the source image while conforming to a target text prompt. In both cases, FlowOpt achieves state-of-the-art results while using roughly the same number of neural function evaluations (NFEs) as existing methods. Code and examples are available on the project's webpage.
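To make the black-box idea concrete, here is a toy sketch (not the authors' actual algorithm): a stand-in "sampler" maps noise to an image, and a generic zeroth-order loop based on central finite differences inverts it by only evaluating the sampler, never differentiating it. The map `sample`, the problem size, and the step-size and iteration counts are all invented for illustration.

```python
import math, random

random.seed(0)

N = 4  # tiny dimension so the toy runs instantly

# Stand-in for a black-box flow sampler: a fixed nonlinear map from
# initial noise z to an "image". In FlowOpt this would be the full
# iterative sampling process of a pretrained flow-matching model.
W = [[random.gauss(0, 0.5) for _ in range(N)] for _ in range(N)]

def sample(z):
    return [math.tanh(sum(W[i][j] * z[j] for j in range(N))) for i in range(N)]

target = sample([random.gauss(0, 1) for _ in range(N)])  # image to invert

def loss(z):
    x = sample(z)
    return sum((x[i] - target[i]) ** 2 for i in range(N))

# Generic zeroth-order loop: the sampler is only ever evaluated, never
# backpropagated through. This is NOT FlowOpt's update rule, just an
# illustration of optimizing "through" the whole process gradient-free.
z0 = [random.gauss(0, 1) for _ in range(N)]
z, step, eps = z0[:], 0.05, 1e-4
for _ in range(500):
    g = []
    for i in range(N):
        zp, zm = z[:], z[:]
        zp[i] += eps
        zm[i] -= eps
        g.append((loss(zp) - loss(zm)) / (2 * eps))  # central difference
    z = [z[i] - step * g[i] for i in range(N)]
    # loss(z) is available at every iteration, so a user can monitor
    # intermediate results and stop early, as the paper highlights.
```

For real flow models the per-evaluation cost is a full sampling run, which is why the paper's emphasis on keeping the NFE budget comparable to existing methods matters.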
Problem

Research questions and friction points this paper is trying to address.

Controlling pretrained diffusion/flow models at test time, without training, for image editing
Enabling gradient-free optimization through the entire sampling process
Achieving state-of-the-art editing results at comparable computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zeroth-order optimization through the entire flow process
Black-box approach requiring no backpropagation through the model
Efficient sampling-path optimization with intermediate monitoring and early stopping
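The abstract also notes that the paper's step-size bound can be estimated empirically. The sketch below illustrates only the general idea with a crude random-probe Lipschitz estimate around the current iterate; the probe scheme, the `0.5 / L**2` step rule, and the toy `sample` map are all hypothetical and not the paper's actual bound.

```python
import math, random

random.seed(1)

N = 4
A = [[random.gauss(0, 0.5) for _ in range(N)] for _ in range(N)]

def sample(z):
    # toy black-box map standing in for a full flow sampling process
    return [math.tanh(sum(A[i][j] * z[j] for j in range(N))) for i in range(N)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def estimate_lipschitz(f, z, n_probes=64, radius=1e-2):
    # Crude empirical estimate of how fast f changes near z, via random
    # probes. FlowOpt proves a specific sufficient step-size condition
    # and shows how to estimate it; this only illustrates the concept.
    base = f(z)
    best = 0.0
    for _ in range(n_probes):
        d = [random.gauss(0, 1) for _ in range(N)]
        s = radius / norm(d)
        probe = f([z[i] + s * d[i] for i in range(N)])
        best = max(best, norm([probe[i] - base[i] for i in range(N)]) / radius)
    return best

z = [random.gauss(0, 1) for _ in range(N)]
L = estimate_lipschitz(sample, z)
step = 0.5 / (L * L)  # hypothetical rule: steeper map, smaller step
```

Probing is cheap relative to the optimization itself, so an estimate like this can be refreshed occasionally without meaningfully increasing the NFE budget.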
Or Ronai
Technion - Israel Institute of Technology
Vladimir Kulikov
Technion - Israel Institute of Technology
Tomer Michaeli
Associate Professor, ECE, Technion; Visiting Researcher, Google DeepMind
Computer Vision · Machine Learning · Image Processing · Signal Processing