Edit-Your-Interest: Efficient Video Editing via Feature Most-Similar Propagation

📅 2025-10-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing video editing methods suffer from high computational overhead, excessive memory consumption, temporal inconsistency, and visual artifacts (e.g., blur, blocking), making it difficult to achieve efficiency and fidelity simultaneously. This paper proposes a lightweight, text-driven, zero-shot video editing framework grounded in diffusion models. The approach integrates spatiotemporal attention, feature caching, and feature propagation to address these limitations. Specifically, it (1) constructs a spatiotemporal feature memory bank with a dynamic update mechanism; (2) introduces a most-similar feature propagation strategy to enhance inter-frame consistency; and (3) employs cross-attention-guided instance mask extraction for fine-grained object editing while preserving background integrity. Evaluated on multiple benchmarks, the method outperforms state-of-the-art approaches, achieving superior visual quality and temporal coherence at substantially lower computational and memory cost.
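To make the propagation idea concrete, here is a minimal, illustrative sketch of "most-similar feature propagation": for each token of the current frame, retrieve the most similar cached token from previous frames by cosine similarity and blend it in. All names, the blend rule, and the `alpha` weight are assumptions for illustration, not the paper's released code.

```python
# Illustrative sketch of feature most-similar propagation (FMP).
# Assumption: tokens are plain feature vectors; the blend rule and
# alpha are hypothetical, chosen only to show the retrieval-and-blend idea.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def propagate_most_similar(current_tokens, memory_bank, alpha=0.5):
    """Blend each current-frame token with its most similar cached token.

    current_tokens: feature vectors of the current frame.
    memory_bank: feature vectors cached from previous frames.
    alpha: blend weight toward the retrieved token (assumed hyperparameter).
    """
    out = []
    for tok in current_tokens:
        # Retrieve the most similar token cached from earlier frames.
        best = max(memory_bank, key=lambda m: cosine(tok, m))
        # Pull the current token toward its match to stabilize appearance
        # across frames.
        out.append([(1 - alpha) * t + alpha * b for t, b in zip(tok, best)])
    return out
```

In a real diffusion pipeline this retrieval would run over spatial-attention tokens inside the U-Net, typically batched on GPU rather than per-token in Python.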

๐Ÿ“ Abstract
Text-to-image (T2I) diffusion models have recently demonstrated significant progress in video editing. However, existing video editing methods are severely limited by their high computational overhead and memory consumption. Furthermore, these approaches often sacrifice visual fidelity, leading to undesirable temporal inconsistencies and artifacts such as blurring and pronounced mosaic-like patterns. We propose Edit-Your-Interest, a lightweight, text-driven, zero-shot video editing method. Edit-Your-Interest introduces a spatio-temporal feature memory to cache features from previous frames, significantly reducing computational overhead compared to full-sequence spatio-temporal modeling approaches. Specifically, we first introduce a Spatio-Temporal Feature Memory bank (SFM), which is designed to efficiently cache and retain the crucial image tokens processed by spatial attention. Second, we propose the Feature Most-Similar Propagation (FMP) method. FMP propagates the most relevant tokens from previous frames to subsequent ones, preserving temporal consistency. Finally, we introduce an SFM update algorithm that continuously refreshes the cached features, ensuring their long-term relevance and effectiveness throughout the video sequence. Furthermore, we leverage cross-attention maps to automatically extract masks for the instances of interest. These masks are seamlessly integrated into the diffusion denoising process, enabling fine-grained control over target objects and allowing Edit-Your-Interest to perform highly accurate edits while robustly preserving the background integrity. Extensive experiments decisively demonstrate that the proposed Edit-Your-Interest outperforms state-of-the-art methods in both efficiency and visual fidelity, validating its superior effectiveness and practicality.
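The abstract describes an SFM update algorithm that "continuously refreshes the cached features." A minimal sketch of such a bank is shown below, using a rolling FIFO eviction policy over per-frame token caches; the class name, capacity, and eviction rule are assumptions, since the paper's update criterion may weigh token relevance rather than age.

```python
# Minimal sketch of a Spatio-Temporal Feature Memory (SFM) bank with a
# rolling update. Assumption: oldest-frame (FIFO) eviction stands in for
# the paper's update algorithm, which is not specified here.
from collections import deque

class FeatureMemoryBank:
    def __init__(self, max_frames=4):
        # Each entry is the token list of one processed frame.
        # deque(maxlen=...) drops the oldest frame automatically.
        self.frames = deque(maxlen=max_frames)

    def update(self, frame_tokens):
        """Cache the tokens of a newly processed frame, evicting the
        oldest frame once capacity is reached."""
        self.frames.append(frame_tokens)

    def all_tokens(self):
        """Flatten the cached frames into one token pool for retrieval."""
        return [tok for frame in self.frames for tok in frame]
```

Keeping only a bounded window of recent frames is what gives the method its memory advantage over full-sequence spatio-temporal attention, at the cost of discarding distant-frame context.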
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in video editing
Eliminating temporal inconsistencies and visual artifacts
Enhancing fine-grained control over target objects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-temporal feature memory reduces computational overhead
Feature most-similar propagation ensures temporal consistency
Cross-attention maps enable automatic mask extraction for editing
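The mask-extraction idea above can be sketched as thresholding the normalized cross-attention map of the target text token, then compositing edited pixels inside the mask with original pixels outside it. The 0.5 threshold and min-max normalization are assumed choices for illustration, not the paper's exact procedure.

```python
# Toy sketch of cross-attention-guided instance masking.
# Assumption: attn_map is a 2D list of attention weights for the target
# token; the threshold and normalization scheme are hypothetical.

def extract_mask(attn_map, threshold=0.5):
    """Binarize a 2D cross-attention map after min-max normalization."""
    lo = min(min(row) for row in attn_map)
    hi = max(max(row) for row in attn_map)
    scale = (hi - lo) or 1.0
    return [[1 if (v - lo) / scale >= threshold else 0 for v in row]
            for row in attn_map]

def composite(edited, original, mask):
    """Apply edits only inside the mask, preserving the background."""
    return [[e if m else o for e, o, m in zip(er, orr, mr)]
            for er, orr, mr in zip(edited, original, mask)]
```

In the diffusion setting, this compositing would happen on latents at each denoising step rather than on final pixels, which is what lets the background stay untouched throughout sampling.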
Yi Zuo
Xidian University, Xi'an, 710071, Shaanxi Province, China
Zitao Wang
Xidian University, Xi'an, 710071, Shaanxi Province, China
Lingling Li
Associate Director of Biostatistics, Sanofi Genzyme
Causal inference, missing data, propensity scores, sequential analytic methods, drug and vaccine safety
Xu Liu
Xidian University, Xi'an, 710071, Shaanxi Province, China
Fang Liu
Xidian University, Xi'an, 710071, Shaanxi Province, China
Licheng Jiao
Distinguished Professor of Xidian University, IEEE Fellow
Neural Networks, Computational Intelligence, Evolutionary Computation, Remote Sensing, Pattern Recognition