🤖 AI Summary
Existing image editing methods often rely on additional training and degrade under distribution shifts. This work proposes a training-free continuous editing approach built on the Rectified Flow framework, which decomposes the editing update into a fidelity term and a steering term. By exploiting the approximate orthogonality of these two components, the method decouples semantic manipulation from fidelity preservation: editing strength is controlled simply by scaling the steering term, with no post-hoc training required. Extensive experiments show that the method achieves high-quality, stable, and continuously adjustable editing across diverse tasks, outperforming existing training-dependent approaches.
📝 Abstract
Continuous image editing aims to provide slider-style control of edit strength while preserving source-image fidelity and maintaining a consistent edit direction. Existing learning-based slider methods typically rely on auxiliary modules trained with synthetic or proxy supervision. This introduces additional training overhead and couples slider behavior to the training distribution, which can reduce reliability under distribution shifts in edits or domains. We propose *FlowSlider*, a training-free method for continuous editing in Rectified Flow. *FlowSlider* decomposes FlowEdit's update into (i) a fidelity term, which acts as a source-conditioned stabilizer that preserves identity and structure, and (ii) a steering term that drives the semantic transition toward the target edit. Geometric analysis and empirical measurements show that these terms are approximately orthogonal, enabling stable strength control by scaling only the steering term while keeping the fidelity term unchanged. As a result, *FlowSlider* provides smooth and reliable control without post-training, improving continuous editing quality across diverse tasks.
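The slider mechanism the abstract describes can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the names `v_fid`, `v_steer`, and `slider_update` are hypothetical, and plain Python vectors stand in for the flow-velocity terms. Because the two terms are approximately orthogonal, scaling only the steering term leaves the fidelity component of the update untouched:

```python
def dot(a, b):
    # Inner product of two plain-list vectors.
    return sum(x * y for x, y in zip(a, b))

def slider_update(v_fid, v_steer, strength):
    # Fixed fidelity term plus strength-scaled steering term (elementwise).
    return [f + strength * s for f, s in zip(v_fid, v_steer)]

# Toy orthogonal directions standing in for the two velocity components.
v_fid = [1.0, 0.0]
v_steer = [0.0, 1.0]

for s in (0.0, 0.5, 1.0):
    v = slider_update(v_fid, v_steer, s)
    # Orthogonality means the projection onto v_fid is independent of s.
    assert abs(dot(v, v_fid) - dot(v_fid, v_fid)) < 1e-9
```

In this sketch the slider value `s` changes only the steering contribution, mirroring the claim that edit strength can be varied continuously without disturbing fidelity.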