Unified Video Editing with Temporal Reasoner

📅 2025-12-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing video editing methods face a trade-off between precision and universality: expert models rely on task-specific priors (e.g., masks), limiting generalization, while unified temporal models eliminate mask dependency but lack spatial localization, resulting in ambiguous instruction-region alignment. To address this, we propose VideoCoF, a two-stage Chain-of-Frames (CoF) latent reasoning paradigm for video diffusion models. VideoCoF first predicts a latent representation of the target edit region, then synthesizes the edited frames, aided by a Rotary Position Embedding (RoPE) alignment strategy for spatiotemporal consistency. This enables fine-grained, mask-free, instruction-driven editing while preserving motion consistency and supporting extrapolation beyond the training video length. Trained on only 50K video pairs, VideoCoF achieves state-of-the-art performance on VideoCoF-Bench, significantly improving editing accuracy and cross-scene generalization.

๐Ÿ“ Abstract
Existing video editing methods face a critical trade-off: expert models offer precision but rely on task-specific priors like masks, hindering unification; conversely, unified temporal in-context learning models are mask-free but lack explicit spatial cues, leading to weak instruction-to-region mapping and imprecise localization. To resolve this conflict, we propose VideoCoF, a novel Chain-of-Frames approach inspired by Chain-of-Thought reasoning. VideoCoF enforces a "see, reason, then edit" procedure by compelling the video diffusion model to first predict reasoning tokens (edit-region latents) before generating the target video tokens. This explicit reasoning step removes the need for user-provided masks while achieving precise instruction-to-region alignment and fine-grained video editing. Furthermore, we introduce a RoPE alignment strategy that leverages these reasoning tokens to ensure motion alignment and enable length extrapolation beyond the training duration. We demonstrate that with a minimal data cost of only 50k video pairs, VideoCoF achieves state-of-the-art performance on VideoCoF-Bench, validating the efficiency and effectiveness of our approach. Our code, weights, and data are available at https://github.com/knightyxp/VideoCoF.
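The "see, reason, then edit" procedure amounts to a specific ordering of the diffusion model's token sequence: clean source-video tokens serve as context, and the model denoises the reasoning tokens (edit-region latents) ahead of the target-video tokens. A minimal sketch of that layout, with all names and dimensions hypothetical (the paper's actual tokenizer and scheduler are not specified here):

```python
import numpy as np

def build_sequence(src_tokens, n_reason, n_target, d):
    """Assemble [source | reasoning | target] tokens for one denoising pass.

    Hypothetical layout: source tokens are clean conditioning context;
    reasoning tokens (edit-region latents) precede the target-frame
    latents, so the model must localize the edit before synthesizing it.
    """
    rng = np.random.default_rng(0)
    reason = rng.standard_normal((n_reason, d))   # noised edit-region latents
    target = rng.standard_normal((n_target, d))   # noised target-frame latents
    seq = np.concatenate([src_tokens, reason, target], axis=0)
    # Only reasoning and target positions receive denoising updates.
    denoise_mask = np.concatenate([
        np.zeros(len(src_tokens), dtype=bool),
        np.ones(n_reason + n_target, dtype=bool),
    ])
    return seq, denoise_mask

src = np.zeros((4, 8))                            # 4 clean source tokens
seq, mask = build_sequence(src, n_reason=2, n_target=4, d=8)
print(seq.shape, mask.sum())                      # (10, 8) 6
```

Because the reasoning tokens sit earlier in the sequence, causal or blockwise attention lets the target tokens attend to the predicted edit region, which is what replaces an explicit user-provided mask.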
Problem

Research questions and friction points this paper is trying to address.

Unified video editing without masks
Precise instruction-to-region alignment
Motion alignment and length extrapolation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Frames approach for explicit reasoning tokens
RoPE alignment strategy for motion and length extrapolation
Mask-free video editing with precise instruction-to-region mapping
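The RoPE alignment idea can be illustrated with a standard rotary embedding: if reasoning tokens reuse the temporal positions of the target frames they describe, their rotary phases match frame-for-frame, which is what supports motion alignment and extrapolation past the training length. This is a minimal NumPy sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary position embedding along the last dim (assumed even)."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)      # per-channel frequencies
    angles = positions[:, None] * freqs[None, :]   # (seq, half) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# Hypothetical alignment: reasoning tokens share the temporal positions of
# the target frames, so both token groups rotate by identical phases.
frames = np.arange(4, dtype=float)                 # target-frame positions 0..3
reason_pos = frames.copy()                         # reasoning tokens reuse them
x = np.ones((4, 8))
assert np.allclose(rope(x, frames), rope(x, reason_pos))
```

Since RoPE encodes relative offsets, positions beyond those seen in training still produce well-defined rotations, which is the mechanism behind length extrapolation.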
Xiangpeng Yang
University of Technology Sydney
Ji Xie
Research Intern, UC Berkeley
Computer Vision, Image Generation, Multi-Modal
Yiyuan Yang
Department of Computer Science, University of Oxford
Signal processing, Data mining, Time series, Multimodality, Machine learning
Yan Huang
University of Technology Sydney
Min Xu
University of Technology Sydney
Qiang Wu
University of Technology Sydney