🤖 AI Summary
Existing video editing methods suffer from high computational overhead, excessive memory consumption, temporal inconsistency, and visual artifacts (e.g., blur, blocking), hindering simultaneous efficiency and fidelity. This paper proposes a lightweight, text-driven zero-shot video editing framework grounded in diffusion models. Our approach integrates spatiotemporal attention, feature caching, and propagation techniques to address these limitations. Specifically, we (1) construct a spatiotemporal feature memory bank with a dynamic update mechanism; (2) introduce a most-similar feature propagation strategy to enhance inter-frame consistency; and (3) employ cross-attention-guided instance mask extraction for fine-grained object editing while preserving background integrity. Evaluated on multiple benchmarks, our method significantly outperforms state-of-the-art approaches, achieving superior visual quality and temporal coherence at substantially lower computational and memory costs.
📄 Abstract
Text-to-image (T2I) diffusion models have recently driven significant progress in video editing.
However, existing video editing methods are severely limited by their high computational overhead and memory consumption.
Furthermore, these approaches often sacrifice visual fidelity, leading to undesirable temporal inconsistencies and artifacts such as blurring and pronounced mosaic-like patterns.
We propose Edit-Your-Interest, a lightweight, text-driven, zero-shot video editing method.
Edit-Your-Interest introduces a spatio-temporal feature memory to cache features from previous frames, significantly reducing computational overhead compared to full-sequence spatio-temporal modeling approaches.
Specifically, we first introduce a Spatio-Temporal Feature Memory bank (SFM), which is designed to efficiently cache and retain the crucial image tokens processed by spatial attention.
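The SFM can be pictured as a fixed-capacity cache of per-frame image tokens. The sketch below is a minimal NumPy illustration; the class name, the FIFO eviction policy, and the capacity value are assumptions for illustration, not details given in the abstract.

```python
import numpy as np

class SpatioTemporalFeatureMemory:
    """Illustrative sketch of an SFM-style bank: cache the image tokens
    produced by spatial attention for the most recent frames in a
    fixed-size FIFO buffer (eviction policy is an assumption)."""

    def __init__(self, capacity=4):
        self.capacity = capacity   # number of past frames to retain
        self.bank = []             # list of (num_tokens, dim) arrays

    def cache(self, frame_tokens):
        """Store one frame's tokens, evicting the oldest frame if full."""
        self.bank.append(frame_tokens)
        if len(self.bank) > self.capacity:
            self.bank.pop(0)

    def tokens(self):
        """All cached tokens stacked into one (N, dim) matrix."""
        return np.concatenate(self.bank, axis=0)

# Usage: cache tokens for 6 frames; only the last 4 frames are kept.
sfm = SpatioTemporalFeatureMemory(capacity=4)
for t in range(6):
    sfm.cache(np.full((16, 8), float(t)))  # 16 tokens of dim 8 per frame
print(len(sfm.bank))       # 4
print(sfm.tokens().shape)  # (64, 8)
```

Caching only a bounded window of past tokens is what keeps memory use constant in sequence length, in contrast to full-sequence spatio-temporal attention.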
Second, we propose the Feature Most-Similar Propagation (FMP) method. FMP propagates the most relevant tokens from previous frames to subsequent ones, preserving temporal consistency.
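FMP can be illustrated as a nearest-neighbour lookup over the cached tokens: each current-frame token is matched to its most similar cached token and the pair is blended. In the sketch below, the use of cosine similarity and the blending weight `alpha` are assumptions for illustration rather than details stated in the abstract.

```python
import numpy as np

def propagate_most_similar(current, memory, alpha=0.5):
    """Illustrative sketch of FMP-style propagation: for each token in
    the current frame, retrieve the most similar cached token (cosine
    similarity, an assumed metric) and blend it in with weight alpha."""
    cur = current / np.linalg.norm(current, axis=1, keepdims=True)
    mem = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    sim = cur @ mem.T                     # (n_cur, n_mem) cosine similarities
    nearest = memory[sim.argmax(axis=1)]  # most similar cached token per query
    return alpha * current + (1 - alpha) * nearest

# Usage: current tokens are near-duplicates of cached ones, so the
# blended output stays close to the current frame's features.
rng = np.random.default_rng(0)
memory = rng.standard_normal((64, 8))                      # cached tokens
current = memory[:16] + 0.01 * rng.standard_normal((16, 8))
out = propagate_most_similar(current, memory)
print(out.shape)  # (16, 8)
```

Pulling each token toward its closest match in earlier frames is one simple way to damp frame-to-frame feature drift, which is the temporal-consistency effect the method targets.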
Finally, we introduce an SFM update algorithm that continuously refreshes the cached features, ensuring their long-term relevance and effectiveness throughout the video sequence.
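One plausible form of such an update is sketched below: the oldest cached slot is blended with freshly computed tokens via an exponential moving average and promoted to the newest position. Both the EMA rule and the `momentum` value are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def refresh_bank(bank, new_tokens, momentum=0.5):
    """Illustrative sketch of an SFM update step: blend new tokens into
    the oldest cached slot (EMA rule and momentum are assumptions) so
    cached features keep tracking the evolving video content."""
    bank[0] = momentum * bank[0] + (1 - momentum) * new_tokens
    bank.append(bank.pop(0))  # move the refreshed slot to the newest position
    return bank

# Usage: the stale all-zeros slot is refreshed toward the new tokens.
bank = [np.zeros((16, 8)), np.ones((16, 8))]
bank = refresh_bank(bank, np.full((16, 8), 10.0))
print(bank[-1][0, 0])  # 5.0  (0.5 * 0 + 0.5 * 10)
```

Whatever the exact rule, the point of the update is the same: without it, a fixed cache would grow stale and propagation from it would degrade over long sequences.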
Furthermore, we leverage cross-attention maps to automatically extract masks for the instances of interest.
These masks are seamlessly integrated into the diffusion denoising process, enabling fine-grained control over target objects and allowing Edit-Your-Interest to perform highly accurate edits while robustly preserving the background integrity.
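The mask-extraction step can be sketched as thresholding an averaged cross-attention map for the text token of interest. In the toy example below, the attention-tensor layout, the min-max normalization, and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

def mask_from_cross_attention(attn, token_idx, threshold=0.5):
    """Illustrative sketch of cross-attention-guided mask extraction:
    average the attention a text token receives over heads, normalize
    to [0, 1], and threshold into a binary instance mask (the 0.5
    threshold is an assumed value)."""
    amap = attn[:, :, token_idx].mean(axis=0)  # (H*W,) mean over heads
    amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)
    return (amap > threshold).astype(np.float32)

# Toy example: 2 heads, a 4x4 latent (16 pixels), 3 text tokens.
rng = np.random.default_rng(1)
attn = rng.random((2, 16, 3))
attn[:, :4, 1] += 5.0  # token 1 attends strongly to the first 4 pixels
mask = mask_from_cross_attention(attn, token_idx=1)
print(mask.reshape(4, 4)[0])  # [1. 1. 1. 1.]
```

During denoising, such a mask could gate the edit, e.g. `mask * edited_latent + (1 - mask) * original_latent`, which is the standard way a binary mask confines changes to the target object while leaving the background untouched.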
Extensive experiments demonstrate that Edit-Your-Interest outperforms state-of-the-art methods in both efficiency and visual fidelity, validating its effectiveness and practicality.