🤖 AI Summary
Text-guided image editing with diffusion models achieves high visual fidelity but suffers from prohibitively high inference latency, hindering real-time applications. To address this, we propose FlashEdit, a framework that, for the first time, unifies inversion and editing into a single step via a One-Step Inversion-and-Editing pipeline. We introduce a Background Shield mechanism that explicitly preserves background regions, ensuring spatial and semantic consistency, and design Sparsified Spatial Cross-Attention to enable semantically precise, computationally efficient local editing. Our method maintains structural integrity and background coherence while reducing editing latency to under 0.2 seconds, a more than 150× speedup over conventional multi-step approaches. This marks the first demonstration of high-fidelity, real-time text-guided image editing. The code and pretrained models will be made publicly available.
📝 Abstract
Text-guided image editing with diffusion models has achieved remarkable quality but suffers from prohibitive latency, hindering real-world applications. We introduce FlashEdit, a novel framework designed to enable high-fidelity, real-time image editing. Its efficiency stems from three key innovations: (1) a One-Step Inversion-and-Editing (OSIE) pipeline that bypasses costly iterative processes; (2) a Background Shield (BG-Shield) technique that guarantees background preservation by selectively modifying features only within the edit region; and (3) a Sparsified Spatial Cross-Attention (SSCA) mechanism that ensures precise, localized edits by suppressing semantic leakage to the background. Extensive experiments demonstrate that FlashEdit maintains superior background consistency and structural integrity, while performing edits in under 0.2 seconds, a more than 150× speedup compared to prior multi-step methods. Our code will be made publicly available at https://github.com/JunyiWuCode/FlashEdit.
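To make the BG-Shield and SSCA ideas concrete, the sketch below shows one plausible reading of the abstract in NumPy: background features are preserved by blending edited features only inside an edit mask, and cross-attention responses at background positions are suppressed so text semantics cannot leak outside the edit region. All function names, shapes, and the exact masking operators are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bg_shield(bg_feat, edit_feat, mask):
    """Illustrative BG-Shield: keep background features exactly where
    mask == 0 and take edited features where mask == 1.
    bg_feat, edit_feat: (H, W, C); mask: (H, W, 1) binary edit mask.
    (Sketch only; the paper's actual operator is not specified here.)"""
    return mask * edit_feat + (1.0 - mask) * bg_feat

def sparsified_cross_attention(q, k, v, spatial_mask):
    """Toy sparsified spatial cross-attention: standard scaled
    dot-product attention from pixels (queries) to text tokens
    (keys/values), with responses zeroed at spatial positions outside
    the edit mask to suppress semantic leakage into the background.
    q: (num_pixels, d); k, v: (num_tokens, d); spatial_mask: (num_pixels,).
    Names and shapes are assumptions for illustration."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (num_pixels, num_tokens)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over tokens
    attn *= spatial_mask[:, None]                 # sparsify: mask out bg rows
    return attn @ v                               # (num_pixels, d)

# Usage: background pixels are untouched, masked-out pixels get no text signal.
bg = np.ones((4, 4, 2))
edit = np.full((4, 4, 2), 5.0)
mask = np.zeros((4, 4, 1))
mask[1:3, 1:3] = 1.0
blended = bg_shield(bg, edit, mask)

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 8))
k = rng.standard_normal((3, 8))
v = rng.standard_normal((3, 8))
smask = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
out = sparsified_cross_attention(q, k, v, smask)
```

The point of the sketch is the division of labor: BG-Shield guarantees pixel-exact background preservation by construction, while SSCA limits where text conditioning can act at all.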