🤖 AI Summary
This work proposes a causal, frame-by-frame video-to-video diffusion editing model that overcomes the limitations of existing methods, which often rely on fixed-length inputs and incur high computational costs, making them inefficient for variable-length videos. By extending 2D image diffusion models into the temporal domain, the approach introduces temporal causal conditioning and a residual flow diffusion mechanism that conditions each frame's generation on the previously predicted frame, thereby focusing explicitly on inter-frame changes. This enables lightweight temporal modeling without sacrificing performance. The method outperforms image-level editing approaches in tasks such as style transfer and object removal, achieving results comparable to full 3D spatiotemporal models while retaining the computational efficiency of image-based models and supporting videos of arbitrary length.
📄 Abstract
Instructional video editing applies edits to an input video using only text prompts, enabling intuitive natural-language control. Despite rapid progress, most methods still require fixed-length inputs and substantial compute. Meanwhile, autoregressive video generation enables efficient variable-length synthesis, yet remains under-explored for video editing. We introduce a causal, efficient video editing model that edits variable-length videos frame by frame. For efficiency, we start from a 2D image-to-image (I2I) diffusion model and adapt it to video-to-video (V2V) editing by conditioning the edit at time step t on the model's prediction at t-1. To leverage videos' temporal redundancy, we propose a new I2I diffusion forward process formulation that encourages the model to predict the residual between the target output and the previous prediction. We call this the Residual Flow Diffusion Model (RFDM); it focuses the denoising process on changes between consecutive frames. Moreover, we propose a new benchmark that better ranks state-of-the-art methods for editing tasks. Trained on paired video data for global/local style transfer and object removal, RFDM surpasses I2I-based methods and competes with fully spatiotemporal (3D) V2V models, while matching the compute of image models and scaling independently of input video length. More content can be found at: https://smsd75.github.io/RFDM_page/
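The abstract's core idea — a forward process whose regression target is the residual between the current target frame and the previous prediction — can be illustrated with a minimal flow-matching-style sketch. This is an assumed, simplified formulation for intuition only (the function name `residual_flow_pair` and the plain linear interpolant are illustrative, not the paper's exact equations):

```python
import numpy as np

def residual_flow_pair(prev_pred: np.ndarray, target: np.ndarray, tau: float):
    """Build one (interpolant, velocity-target) training pair at flow time tau in [0, 1].

    Instead of transporting pure noise to the target frame, the interpolant
    moves from the previous frame's prediction to the current target, so the
    network's regression target is the inter-frame residual.
    """
    z_tau = (1.0 - tau) * prev_pred + tau * target  # interpolant between frames
    velocity = target - prev_pred                   # residual the model learns to predict
    return z_tau, velocity

# Toy frames: at tau=0 the interpolant equals the previous prediction,
# at tau=1 it equals the target frame.
prev_pred = np.zeros((4, 4))
target = np.ones((4, 4))
z0, v = residual_flow_pair(prev_pred, target, 0.0)
z1, _ = residual_flow_pair(prev_pred, target, 1.0)
```

When consecutive frames are similar, the residual is near zero, which is one way to see why this focuses the denoising process on inter-frame changes rather than on regenerating each frame from scratch.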