SplitFlow: Flow Decomposition for Inversion-Free Text-to-Image Editing

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing two key bottlenecks in rectified flow models for image editing—imprecise latent-space inversion and gradient entanglement—this paper proposes an inversion-free text-to-image editing framework. Methodologically, it (1) decomposes the target text prompt into multiple semantically distinct sub-prompts to drive independent flow-field modeling; (2) introduces an adaptive weighted soft-aggregation mechanism to fuse the resulting sub-flow fields; and (3) incorporates a projection-based gradient modulation strategy to mitigate semantic conflicts during optimization. The approach jointly enhances editing consistency and diversity. Empirically, it achieves significant improvements in semantic fidelity and attribute disentanglement on zero-shot text-guided editing tasks, yielding generated images that more accurately align with target descriptions. It comprehensively outperforms existing ODE-based and inversion-dependent methods.
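The projection-based gradient modulation described above is inspired by gradient-surgery methods from multi-task learning (e.g., PCGrad): when two sub-flow velocities point in conflicting directions, one is projected off the other. A minimal sketch of that idea, with hypothetical function and variable names (not the authors' implementation):

```python
import numpy as np

def project_if_conflicting(v_i, v_j):
    """PCGrad-style projection sketch: if sub-flow velocities v_i and
    v_j conflict (negative dot product), remove from v_i its component
    along v_j, so the two directions no longer oppose each other."""
    dot = np.dot(v_i, v_j)
    if dot < 0:
        # Subtract the (negative) projection of v_i onto v_j.
        v_i = v_i - dot / (np.dot(v_j, v_j) + 1e-12) * v_j
    return v_i
```

For instance, two exactly opposed velocities cancel after projection, while orthogonal ones pass through unchanged; the paper's actual modulation operates on sub-target velocity fields in latent space.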

📝 Abstract
Rectified flow models have become a de facto standard in image generation due to their stable sampling trajectories and high-fidelity outputs. Despite their strong generative capabilities, they face critical limitations in image editing tasks: inaccurate inversion processes for mapping real images back into the latent space, and gradient entanglement issues during editing often result in outputs that do not faithfully reflect the target prompt. Recent efforts have attempted to directly map source and target distributions via ODE-based approaches without inversion; however, these methods still yield suboptimal editing quality. In this work, we propose a flow decomposition-and-aggregation framework built upon an inversion-free formulation to address these limitations. Specifically, we semantically decompose the target prompt into multiple sub-prompts, compute an independent flow for each, and aggregate them to form a unified editing trajectory. While we empirically observe that decomposing the original flow enhances diversity in the target space, generating semantically aligned outputs still requires consistent guidance toward the full target prompt. To this end, we design a projection and soft-aggregation mechanism for flow, inspired by gradient conflict resolution in multi-task learning. This approach adaptively weights the sub-target velocity fields, suppressing semantic redundancy while emphasizing distinct directions, thereby preserving both diversity and consistency in the final edited output. Experimental results demonstrate that our method outperforms existing zero-shot editing approaches in terms of semantic fidelity and attribute disentanglement. The code is available at https://github.com/Harvard-AI-and-Robotics-Lab/SplitFlow.
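The adaptive weighting the abstract describes could, for example, score each sub-flow by its alignment with the full-prompt flow and combine them with softmax weights. The sketch below illustrates that idea only; the function name, the cosine-alignment score, and the temperature parameter are assumptions, not the paper's exact formulation:

```python
import numpy as np

def soft_aggregate(sub_flows, full_flow, temperature=1.0):
    """Hypothetical soft-aggregation sketch: weight each sub-prompt
    velocity field by its cosine alignment with the full-prompt flow,
    then combine via softmax so aligned sub-flows dominate."""
    sub_flows = np.stack(sub_flows)                      # (k, d)
    norms = (np.linalg.norm(sub_flows, axis=1)
             * np.linalg.norm(full_flow) + 1e-12)
    align = sub_flows @ full_flow / norms                # cosine per sub-flow
    w = np.exp(align / temperature)
    w /= w.sum()                                         # softmax weights
    return w @ sub_flows                                 # aggregated velocity
```

Under this toy weighting, a sub-flow pointing along the full-prompt direction receives a larger weight than an orthogonal one, which matches the stated goal of suppressing redundancy while emphasizing distinct directions.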
Problem

Research questions and friction points this paper is trying to address.

Addresses inaccurate inversion processes in text-to-image editing
Solves gradient entanglement issues during image editing tasks
Improves semantic fidelity and attribute disentanglement in editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes target prompt into semantic sub-prompts
Aggregates sub-flows via projection and soft-weighting
Enables inversion-free editing with diversity and consistency
Sung-Hoon Yoon
Postdoctoral fellow @ Harvard Medical, Ph.D/MS/BS @ KAIST
Multi-modal Visual Perception · Medical AI · Computer Vision · Label Efficient Learning
Minghan Li
Harvard AI and Robotics Lab, Harvard University
Gaspard Beaudouin
École des Ponts, Institut Polytechnique de Paris
Congcong Wen
Harvard AI and Robotics Lab, Harvard University; New York University Abu Dhabi
Muhammad Rafay Azhar
Harvard AI and Robotics Lab, Harvard University
Mengyu Wang
Harvard AI and Robotics Lab, Harvard University; Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University