VIVID-10M: A Dataset and Baseline for Versatile and Interactive Video Local Editing

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 3
Influential: 2
🤖 AI Summary
To address three key challenges in video editing (the lack of large-scale real-world datasets, high annotation costs, and limited interactivity), this paper introduces VIVID-10M, the first large-scale hybrid image-video local editing dataset (9.7M samples), and proposes VIVID, an interactive video editing model. Methodologically, VIVID features: (1) a keyframe-guided iterative interaction paradigm enabling entity addition, deletion, and modification with efficient cross-frame propagation; (2) a lightweight real-video data construction strategy balancing diversity, scalability, and training efficiency; and (3) a unified framework integrating diffusion modeling, keyframe attention propagation, joint image-video training, and instruction-driven localized mask editing. Experiments demonstrate that VIVID achieves state-of-the-art performance on both automated metrics and user studies, while significantly reducing editing latency and training overhead. The code and dataset will be publicly released.
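To make the interaction paradigm concrete, here is a minimal Python sketch of the keyframe-guided loop: cheap image-scale edits are iterated on a single keyframe, and the expensive video pass runs only once to propagate the accepted result. All names (`edit_keyframe`, `propagate_edit`, `interactive_edit`) and the placeholder bodies are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def edit_keyframe(frame, instruction, mask):
    # Stand-in for an instruction-driven image diffusion editor that
    # inpaints only the masked local region (hypothetical, not VIVID's API).
    edited = frame.copy()
    edited[mask] = 0.5  # placeholder edit; a real model would inpaint here
    return edited

def propagate_edit(video, keyframe, keyframe_idx):
    # Stand-in for the video model; in VIVID this step attends to the
    # edited keyframe so the edit carries over to every frame.
    out = video.copy()
    out[keyframe_idx] = keyframe
    return out

def interactive_edit(video, keyframe_idx, requests):
    # The user iterates on a single frame (cheap, image-scale inference);
    # the costly video propagation runs once, after the edit is accepted.
    keyframe = video[keyframe_idx]
    for instruction, mask in requests:
        keyframe = edit_keyframe(keyframe, instruction, mask)
    return propagate_edit(video, keyframe, keyframe_idx)

# Toy usage: an 8-frame grayscale "video" and one localized edit request.
video = np.random.rand(8, 64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[16:32, 16:32] = True
edited = interactive_edit(video, keyframe_idx=0, requests=[("add a hat", mask)])
```

The design point the sketch illustrates is latency: only the single-frame editor sits inside the user's refinement loop, which is what the summary credits for reduced editing latency.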

📝 Abstract
Diffusion-based image editing models have made remarkable progress in recent years. However, achieving high-quality video editing remains a significant challenge. One major hurdle is the absence of open-source, large-scale video editing datasets built on real-world data, as constructing such datasets is both time-consuming and costly. Moreover, video data requires a significantly larger number of tokens for its representation, which substantially increases the training cost of video editing models. Lastly, current video editing models offer limited interactivity, often making it difficult for users to express their editing requirements effectively in a single attempt. To address these challenges, this paper introduces a dataset, VIVID-10M, and a baseline model, VIVID. VIVID-10M is the first large-scale hybrid image-video local editing dataset, aimed at reducing data construction and model training costs; it comprises 9.7M samples covering a wide range of video editing tasks. VIVID is a Versatile and Interactive VIdeo local eDiting model trained on VIVID-10M that supports entity addition, modification, and deletion. At its core, a keyframe-guided interactive video editing mechanism enables users to iteratively edit keyframes and propagate those edits to other frames, reducing the latency of reaching the desired result. Extensive experimental evaluations show that our approach achieves state-of-the-art performance in video local editing, surpassing baseline methods on both automated metrics and user studies. The VIVID-10M dataset and the VIVID editing model will be available at https://inkosizhong.github.io/VIVID/.
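The propagation step the abstract describes is commonly realized as cross-attention from each frame's latent tokens to the edited keyframe's tokens; the PyTorch sketch below shows that pattern. The class name, dimensions, and residual placement are assumptions for illustration, not VIVID's exact architecture.

```python
import torch
import torch.nn as nn

class KeyframeAttention(nn.Module):
    # Each frame's latent tokens cross-attend to the edited keyframe's
    # tokens, so a single keyframe edit steers the whole clip.
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, frame_tokens, keyframe_tokens):
        # frame_tokens:    (batch * frames, tokens_per_frame, dim)
        # keyframe_tokens: (batch * frames, keyframe_tokens, dim)
        out, _ = self.attn(frame_tokens, keyframe_tokens, keyframe_tokens)
        return frame_tokens + out  # residual keeps the base denoiser intact

# Toy usage: broadcast one edited keyframe's tokens to every frame.
layer = KeyframeAttention(dim=64)
frames = torch.randn(8, 256, 64)                 # 8 frames, 256 tokens each
keyframe = torch.randn(1, 256, 64).expand(8, -1, -1)
steered = layer(frames, keyframe)
```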
Problem

Research questions and friction points this paper is trying to address.

Lack of open-source large-scale video editing datasets
High training costs, since video requires far more tokens to represent than images
Limited interactivity in current video editing models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale hybrid image-video dataset VIVID-10M, enabling joint image-video training (see the sketch after this list)
Keyframe-guided interactive video editing mechanism
Versatile model supporting entity addition, modification, and deletion
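A sketch of the joint image-video training idea behind the hybrid dataset: images are folded into the video pipeline as one-frame clips, so a single model and training loop consume both modalities. The (B, C, T, H, W) layout and the `to_video_batch` helper are assumptions, not the paper's documented format.

```python
import torch

def to_video_batch(images):
    # Fold an image batch (B, C, H, W) into single-frame videos
    # (B, C, T=1, H, W) so images and real clips share one training loop.
    return images.unsqueeze(2) if images.dim() == 4 else images

# Toy usage: both batches now have the same rank and can feed one model.
images = torch.randn(4, 3, 256, 256)       # image samples
videos = torch.randn(4, 3, 16, 256, 256)   # 16-frame video samples
assert to_video_batch(images).shape == (4, 3, 1, 256, 256)
assert to_video_batch(videos).shape == videos.shape
```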