🤖 AI Summary
This paper addresses the challenge of propagating user edits made on the first frame of a video consistently across the entire sequence while preserving both spatial content fidelity and temporal coherence, without requiring DDIM inversion. To this end, we propose an inversion-free framework for one-shot controllable video editing (OCVE), the first driven by visual prompts. Our method introduces content consistency sampling (CCS) to enforce content fidelity between edited and source frames, and temporal-content consistency sampling (TCS), built upon Stein Variational Gradient Descent, to explicitly model inter-frame dynamic constraints. Extensive experiments demonstrate that our approach significantly outperforms existing inversion-based methods across multiple benchmarks, achieving state-of-the-art performance in editing accuracy, source-content consistency, and temporal coherence.
📝 Abstract
One-shot controllable video editing (OCVE) is an important yet challenging task: propagating user edits made, with any image editing tool, on the first frame of a video to all subsequent frames, while ensuring content consistency between the edited frames and the source frames. To achieve this, prior methods employ DDIM inversion to transform the source frames into latent noise, which is then fed into a pre-trained diffusion model, conditioned on the user-edited first frame, to generate the edited video. However, DDIM inversion accumulates errors that prevent the latent noise from accurately reconstructing the source frames, ultimately compromising content consistency in the generated edited frames. To overcome this, our method eliminates the need for DDIM inversion by approaching OCVE from a novel perspective based on visual prompting. Furthermore, inspired by consistency models, which can perform multi-step consistency sampling to generate a sequence of content-consistent images, we propose content consistency sampling (CCS) to ensure content consistency between the generated edited frames and the source frames. Moreover, we introduce temporal-content consistency sampling (TCS), based on Stein Variational Gradient Descent, to ensure temporal consistency across the edited frames. Extensive experiments validate the effectiveness of our approach.
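For readers unfamiliar with the Stein Variational Gradient Descent (SVGD) machinery that TCS builds on, the sketch below shows the standard RBF-kernel SVGD update on a toy 1-D target. This is generic background only, not the paper's TCS: the target distribution, kernel bandwidth `h`, and step size `eps` are placeholder choices for illustration.

```python
import numpy as np

def svgd_step(X, grad_logp, h=1.0, eps=0.1):
    """One SVGD update on particles X of shape (n, d), moving each particle
    along phi(x_i) = (1/n) * sum_j [k(x_j, x_i) grad log p(x_j)
                                    + grad_{x_j} k(x_j, x_i)]."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]              # (n, n, d): x_i - x_j
    K = np.exp(-np.sum(diff ** 2, -1) / (2 * h**2))   # RBF kernel matrix
    attract = K @ grad_logp(X) / n                    # kernel-weighted scores
    repel = np.sum(K[:, :, None] * diff, 1) / (n * h**2)  # keeps particles apart
    return X + eps * (attract + repel)

# Toy example: drive particles toward a standard 1-D Gaussian,
# whose score is grad log p(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(5.0, 1.0, size=(50, 1))  # start far from the target
for _ in range(500):
    X = svgd_step(X, lambda x: -x)
print(float(X.mean()))  # particle mean ends up near 0 after convergence
```

The repulsive term is what distinguishes SVGD from plain gradient ascent on log-density: it spreads the particle set out rather than letting all particles collapse onto the mode, which is the property TCS exploits to keep a set of frames mutually consistent rather than identical.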