FlashEdit: Decoupling Speed, Structure, and Semantics for Precise Image Editing

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Text-guided image editing with diffusion models achieves high visual fidelity but suffers from prohibitive inference latency, hindering real-time applications. To address this, we propose One-Step Inversion-and-Editing, a novel framework that, for the first time, unifies inversion and editing into a single step. We introduce a Background Shield mechanism that explicitly preserves background regions, ensuring spatial and semantic consistency, and we design Sparsified Spatial Cross-Attention to enable semantically precise, computationally efficient local editing. Our method maintains structural integrity and background coherence while reducing editing latency to under 0.2 seconds, a more than 150× speedup over conventional multi-step approaches. This marks the first demonstration of high-fidelity, real-time text-guided image editing. The code and pretrained models are publicly available.

📝 Abstract
Text-guided image editing with diffusion models has achieved remarkable quality but suffers from prohibitive latency, hindering real-world applications. We introduce FlashEdit, a novel framework designed to enable high-fidelity, real-time image editing. Its efficiency stems from three key innovations: (1) a One-Step Inversion-and-Editing (OSIE) pipeline that bypasses costly iterative processes; (2) a Background Shield (BG-Shield) technique that guarantees background preservation by selectively modifying features only within the edit region; and (3) a Sparsified Spatial Cross-Attention (SSCA) mechanism that ensures precise, localized edits by suppressing semantic leakage to the background. Extensive experiments demonstrate that FlashEdit maintains superior background consistency and structural integrity, while performing edits in under 0.2 seconds, an over 150× speedup compared to prior multi-step methods. Our code will be made publicly available at https://github.com/JunyiWuCode/FlashEdit.
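The BG-Shield idea described above — keeping background features untouched by blending edited features back only inside the edit region — can be sketched as a simple masked blend over feature maps. This is a minimal illustration under assumed tensor shapes, not the authors' implementation:

```python
import numpy as np

def bg_shield(original_feats, edited_feats, edit_mask):
    """Blend two feature maps so edits apply only inside the mask.

    original_feats, edited_feats: (C, H, W) feature maps (hypothetical shapes)
    edit_mask: (H, W) binary mask, 1 inside the edit region, 0 in the background
    """
    m = edit_mask[None, ...].astype(original_feats.dtype)  # broadcast over channels
    # Background positions (m == 0) keep the original features exactly.
    return m * edited_feats + (1.0 - m) * original_feats

# Toy check: edits land only inside the masked square.
orig = np.zeros((4, 8, 8))
edit = np.ones((4, 8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
out = bg_shield(orig, edit, mask)
```

Because the blend is exact (not learned), background preservation is guaranteed by construction wherever the mask is zero.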
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in text-guided diffusion image editing
Preserving background consistency during semantic modifications
Preventing semantic leakage while maintaining structural integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-step inversion-editing pipeline bypasses iterative processes
Background Shield technique selectively modifies edit regions
Sparsified Spatial Cross-Attention prevents semantic leakage
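The third bullet — sparsifying cross-attention spatially so text semantics cannot leak into the background — can be sketched as cross-attention whose output is applied only at query positions inside the edit mask. The shapes and the pass-through behavior for background queries are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def sparsified_spatial_cross_attention(q, k, v, edit_mask):
    """Cross-attention where only spatial positions inside the edit region
    receive text-conditioned updates; background positions pass through.

    q: (HW, d) image-query features (flattened spatial grid, hypothetical)
    k, v: (T, d) text-token keys and values
    edit_mask: (HW,) binary, 1 = inside edit region
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (HW, T)
    scores -= scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    out = attn @ v                                     # (HW, d) text-conditioned update
    m = edit_mask[:, None].astype(q.dtype)
    # Zeroing the update outside the mask suppresses semantic leakage.
    return m * out + (1.0 - m) * q
```

With an all-zero mask the layer is an identity over the image features, which makes the "no leakage to background" property easy to verify.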
👥 Authors
Junyi Wu — Shanghai Jiao Tong University
Zhiteng Li — Shanghai Jiao Tong University (Large Language Models · Model Compression · Computer Vision)
Haotong Qin — ETH Zürich (TinyML · Model Compression · Computer Vision · Deep Learning)
Xiaohong Liu — Shanghai Jiao Tong University
Linghe Kong — Shanghai Jiao Tong University (Internet of Things · Mobile Computing · Big Data)
Yulun Zhang — Shanghai Jiao Tong University
Xiaokang Yang — Shanghai Jiao Tong University