InsViE-1M: Effective Instruction-based Video Editing with Elaborate Dataset Construction

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current instruction-based video editing methods are hindered by the scarcity of high-quality triplets (source video / edited video / natural language instruction), resulting in low-resolution, short-duration training data and poor editing fidelity. To address this, we introduce InsViE-1M, the first million-scale, high-quality instruction-based video editing dataset, supporting pure natural-language instructions without requiring masks or attribute inputs. We propose a novel GPT-4o-guided two-stage edit-and-filter pipeline, incorporating frame-propagation-based editing, image-to-video triplet construction, and multi-stage supervised training to significantly enhance instruction following and spatiotemporal consistency. Extensive experiments demonstrate that our approach achieves state-of-the-art performance across editing accuracy, visual quality, and motion coherence. Both the code and the InsViE-1M dataset are publicly released.

📝 Abstract
Instruction-based video editing allows effective and interactive editing of videos using only instructions, without extra inputs such as masks or attributes. However, collecting high-quality training triplets (source video, edited video, instruction) is a challenging task. Existing datasets mostly consist of low-resolution, short-duration source videos in limited quantities with unsatisfactory editing quality, limiting the performance of trained editing models. In this work, we present a high-quality Instruction-based Video Editing dataset with 1M triplets, namely InsViE-1M. We first curate high-resolution and high-quality source videos and images, then design an effective editing-filtering pipeline to construct high-quality editing triplets for model training. For a source video, we generate multiple edited samples of its first frame with different intensities of classifier-free guidance, which are automatically filtered by GPT-4o with carefully crafted guidelines. The edited first frame is propagated to subsequent frames to produce the edited video, followed by another round of filtering for frame quality and motion evaluation. We also generate and filter a variety of video editing triplets from high-quality images. With the InsViE-1M dataset, we propose a multi-stage learning strategy to train our InsViE model, progressively enhancing its instruction-following and editing ability. Extensive experiments demonstrate the advantages of our InsViE-1M dataset and the trained model over state-of-the-art works. Codes are available at InsViE.
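The edit-and-filter pipeline described in the abstract can be sketched as follows. This is a minimal illustrative skeleton only: every function name, the toy scoring rule, and the CFG scales are hypothetical stand-ins. The actual pipeline uses a real instruction-based image editor, GPT-4o scoring against crafted guidelines, and a first-frame propagation model, none of which are reproduced here.

```python
def edit_first_frame(frame, instruction, cfg_scales):
    """Hypothetical editor: produce one edited candidate per
    classifier-free guidance (CFG) scale."""
    return [{"frame": frame, "cfg": s} for s in cfg_scales]

def score_edit(candidate):
    """Stand-in for GPT-4o quality scoring; here a toy score that
    simply peaks at a mid-range CFG scale for illustration."""
    return 1.0 - abs(candidate["cfg"] - 7.5) / 10.0

def filter_candidates(candidates, threshold=0.8):
    """First filtering stage: keep only candidates above threshold."""
    return [c for c in candidates if score_edit(c) >= threshold]

def propagate(candidate, n_frames):
    """Stand-in for propagating the edited first frame to the
    remaining frames of the source video."""
    return [candidate["frame"]] * n_frames

def build_triplet(source_video, instruction, cfg_scales=(5.0, 7.5, 10.0)):
    """Assemble one (source, edited, instruction) training triplet,
    or None if every candidate is rejected."""
    first_frame = source_video[0]
    candidates = edit_first_frame(first_frame, instruction, cfg_scales)
    kept = filter_candidates(candidates)
    if not kept:
        return None  # discarded by the first filtering stage
    best = max(kept, key=score_edit)
    edited_video = propagate(best, len(source_video))
    # A second filtering stage would evaluate per-frame quality and
    # motion consistency of edited_video here.
    return (source_video, edited_video, instruction)

triplet = build_triplet(["f0", "f1", "f2"], "make the sky sunset orange")
```

The key design point the sketch captures is that filtering happens twice: once on cheap single-frame candidates (so only the best edit is propagated) and once on the full edited video.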
Problem

Research questions and friction points this paper is trying to address.

Constructing a high-quality video editing dataset is difficult and expensive.
Existing datasets are limited in resolution, duration, scale, and editing quality.
Instruction-based video editing models need better training data and strategies.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A high-quality dataset of 1M instruction-based editing triplets.
Multiple edited candidates generated with varying classifier-free guidance intensities, then filtered by GPT-4o.
A multi-stage learning strategy that progressively trains the editing model.
Yuhui Wu
PolyU
Image/Video Editing, low-light enhancement
Liyi Chen
PhD at PolyU, HK
Rui Li
The Hong Kong Polytechnic University, OPPO Research Institute
Shihao Wang
The Hong Kong Polytechnic University
Chenxi Xie
The Hong Kong Polytechnic University, OPPO Research Institute
Lei Zhang
The Hong Kong Polytechnic University, OPPO Research Institute