🤖 AI Summary
Existing text-guided video editing methods suffer from temporal inconsistency, motion distortion, and poor cross-domain generalization, primarily due to insufficient modeling of spatiotemporal pixel correlations in the latent space. To address this, we propose STR-Match, a training-free method built on a latent-space optimization framework for text-to-video diffusion models. Our approach introduces a novel spatiotemporal relevance matching mechanism: it quantifies inter-frame pixel correlations via an STR (spatiotemporal relevance) score and jointly leverages 2D spatial attention with a lightweight 1D temporal module, bypassing computationally expensive 3D attention while ensuring spatiotemporal coherence efficiently. Additionally, we incorporate a latent masking strategy to enhance editing fidelity. Experiments demonstrate that STR-Match significantly outperforms state-of-the-art methods on complex scenes: it robustly preserves key visual attributes of source videos even under large domain transformations, and achieves both high-fidelity generation and strong temporal consistency.
📝 Abstract
Previous text-guided video editing methods often suffer from temporal inconsistency, motion distortion, and, most notably, limited domain transformation. We attribute these limitations to insufficient modeling of spatiotemporal pixel relevance during the editing process. To address this, we propose STR-Match, a training-free video editing algorithm that produces visually appealing and spatiotemporally coherent videos through latent optimization guided by our novel STR score. The score captures spatiotemporal pixel relevance across adjacent frames by leveraging the 2D spatial attention and 1D temporal modules of text-to-video (T2V) diffusion models, without the overhead of computationally expensive 3D attention mechanisms. Integrated into a latent optimization framework with a latent mask, STR-Match generates temporally consistent and visually faithful videos, maintaining strong performance even under significant domain transformations while preserving key visual attributes of the source. Extensive experiments demonstrate that STR-Match consistently outperforms existing methods in both visual quality and spatiotemporal consistency.
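To make the factorized-attention idea concrete, here is a minimal NumPy sketch of the kind of computation the STR score describes: combining per-frame 2D spatial attention with a lightweight 1D temporal attention over adjacent frames, instead of one full 3D attention. All shapes, the way the two attention maps are combined, and the function names are our illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def str_score(feats):
    """Toy spatiotemporal relevance between adjacent frames.

    feats: (T, N, C) latent features for T frames of N spatial tokens.
    Returns a (T-1, N) map: relevance of each token to the next frame.
    This is an illustrative sketch, not the STR-Match implementation.
    """
    T, N, C = feats.shape
    scale = 1.0 / np.sqrt(C)
    scores = []
    for t in range(T - 1):
        # 2D spatial attention within frame t: token-to-token affinities.
        spatial = softmax(feats[t] @ feats[t].T * scale, axis=-1)  # (N, N)
        # 1D temporal attention: each token of frame t attends over the
        # same token position in frames t and t+1 (a 2-step time axis).
        pair = np.stack([feats[t], feats[t + 1]], axis=1)          # (N, 2, C)
        temporal = softmax(
            np.einsum('nc,nkc->nk', feats[t], pair) * scale, axis=-1
        )                                                           # (N, 2)
        # Combine: propagate each token's next-frame temporal weight
        # through the spatial affinities to get a per-token relevance map.
        scores.append(spatial @ temporal[:, 1])
    return np.stack(scores)  # (T-1, N)
```

Because the spatial map is row-stochastic and the temporal weights lie in (0, 1), the resulting relevance values are bounded in (0, 1); this is the kind of quantity a latent-optimization objective could match between the source and edited videos.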