🤖 AI Summary
Addressing two key bottlenecks in rectified flow models for image editing—imprecise latent-space inversion and gradient entanglement—this paper proposes an inversion-free text-to-image editing framework. Methodologically, it (1) decomposes the target text prompt into multiple semantically distinct sub-prompts to drive independent flow-field modeling; (2) introduces an adaptive weighted soft-aggregation mechanism to fuse the resulting sub-flow fields; and (3) incorporates a projection-based gradient modulation strategy to mitigate semantic conflicts during optimization. The approach jointly enhances editing consistency and diversity. Empirically, it achieves significant improvements in semantic fidelity and attribute disentanglement on zero-shot text-guided editing tasks, yielding generated images that more accurately align with target descriptions. It comprehensively outperforms existing ODE-based and inversion-dependent methods.
📝 Abstract
Rectified flow models have become a de facto standard in image generation due to their stable sampling trajectories and high-fidelity outputs. Despite their strong generative capabilities, they face critical limitations in image editing tasks: inaccurate inversion processes for mapping real images back into the latent space, and gradient entanglement issues during editing often result in outputs that do not faithfully reflect the target prompt. Recent efforts have attempted to directly map source and target distributions via ODE-based approaches without inversion; however,these methods still yield suboptimal editing quality. In this work, we propose a flow decomposition-and-aggregation framework built upon an inversion-free formulation to address these limitations. Specifically, we semantically decompose the target prompt into multiple sub-prompts, compute an independent flow for each, and aggregate them to form a unified editing trajectory. While we empirically observe that decomposing the original flow enhances diversity in the target space, generating semantically aligned outputs still requires consistent guidance toward the full target prompt. To this end, we design a projection and soft-aggregation mechanism for flow, inspired by gradient conflict resolution in multi-task learning. This approach adaptively weights the sub-target velocity fields, suppressing semantic redundancy while emphasizing distinct directions, thereby preserving both diversity and consistency in the final edited output. Experimental results demonstrate that our method outperforms existing zero-shot editing approaches in terms of semantic fidelity and attribute disentanglement. The code is available at https://github.com/Harvard-AI-and-Robotics-Lab/SplitFlow.