🤖 AI Summary
To address the scarcity of large-scale paired training data in instruction-driven video editing, this paper proposes a low-cost pretraining paradigm that applies in-context learning to unpaired video clips, teaching generic editing operations (including insertion, replacement, and deletion) guided by natural language instructions. Built upon the HunyuanVideo text-to-video (T2V) framework, the approach combines large-scale self-supervised pretraining on unlabeled videos with fine-tuning on a small set of high-quality paired samples, effectively balancing instruction alignment and generation fidelity. Crucially, it eliminates reliance on densely annotated data and endows foundation generative models with zero-shot editing capability. Experiments demonstrate absolute improvements of 12% in instruction-following accuracy and 15% in editing quality over prior methods, along with superior visual fidelity and semantic consistency relative to current state-of-the-art approaches.
📝 Abstract
Despite the rapid progress of instruction-based image editing, its extension to video remains underexplored, primarily due to the prohibitive cost and complexity of constructing large-scale paired video editing datasets. To address this challenge, we introduce a low-cost pretraining strategy for instruction-based video editing that leverages in-context learning from unpaired video clips. We show that pretraining a foundation video generation model with this strategy endows it with general editing capabilities, such as insertion, replacement, and deletion, according to input editing instructions. The pretrained model can then be efficiently refined with a small amount of high-quality paired editing data. Built upon HunyuanVideoT2V, our framework first pretrains on approximately 1M real video clips to learn basic editing concepts, and subsequently fine-tunes on fewer than 150k curated editing pairs to extend to more editing tasks and improve editing quality. Comparative experiments show that our method surpasses existing instruction-based video editing approaches in both instruction alignment and visual fidelity, achieving a 12% improvement in editing-instruction following and a 15% improvement in editing quality.
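The two-stage recipe described above (large-scale self-supervised pretraining on unpaired clips, followed by refinement on a small curated paired set) can be sketched as a simple training schedule. This is a minimal illustrative sketch only: the `Stage` dataclass, function name, and dataset labels are hypothetical assumptions, and only the approximate sample counts and paired/unpaired distinction come from the abstract.

```python
# Illustrative sketch of the two-stage training recipe from the abstract.
# The Stage dataclass, build_schedule(), and dataset labels are assumptions;
# only the approximate sample counts and the paired/unpaired split are
# taken from the paper's description.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    data: str          # which dataset the stage draws from (hypothetical label)
    num_samples: int   # approximate dataset size reported in the abstract
    paired: bool       # whether instruction/edit pairs are required

def build_schedule() -> list:
    """Return the two training stages in execution order."""
    return [
        # Stage 1: in-context pretraining on unpaired real video clips
        Stage("pretrain", "unpaired_video_clips", 1_000_000, paired=False),
        # Stage 2: refinement on a small curated set of editing pairs
        Stage("finetune", "curated_editing_pairs", 150_000, paired=True),
    ]

for stage in build_schedule():
    print(f"{stage.name}: ~{stage.num_samples:,} samples, paired={stage.paired}")
```

The key design point the schedule captures is the cost asymmetry: the expensive-to-annotate paired data is needed only in the second, much smaller stage, while the bulk of training uses freely available unpaired clips.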