Point2Insert: Video Object Insertion via Sparse Point Guidance

📅 2026-02-04
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing video object insertion methods either rely on dense mask annotations or struggle to achieve precise spatial localization, resulting in cumbersome workflows and limited performance. This work proposes the first sparse point-guided framework for video object insertion, enabling fine-grained spatial control through only a few user-provided positive and negative points. The approach employs a two-stage training strategy: first, an image-level insertion model conditioned on either points or masks is trained; then, the model is adapted to video using synthetic video pairs, enhanced by mask-guided knowledge distillation to improve point-based guidance. Experiments demonstrate that the proposed method significantly outperforms strong baselines, achieving substantial gains in both insertion success rate and localization accuracy—surpassing even models with over ten times more parameters.
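The summary above describes conditioning the insertion model on a few positive and negative user points rather than a dense mask. The paper does not give implementation details here, so the following is only a minimal sketch of one plausible way such sparse points could be turned into a spatial conditioning signal: each point is splatted as a soft Gaussian into a two-channel map (positive / negative) that a generative backbone could consume alongside the video frames. All function names, the channel layout, and the Gaussian splatting are assumptions for illustration, not the authors' design.

```python
# Hypothetical sketch: rasterize sparse positive/negative points into a
# 2-channel conditioning map (not taken from the Point2Insert paper).
import torch

def points_to_condition_map(pos_points, neg_points, height, width, sigma=8.0):
    """Build a (2, H, W) map: channel 0 = positive points, channel 1 = negative points.

    pos_points / neg_points: lists of (x, y) pixel coordinates.
    Each point is splatted as an isotropic Gaussian so the network receives a
    soft spatial hint instead of a single hot pixel.
    """
    ys = torch.arange(height).view(-1, 1).float()
    xs = torch.arange(width).view(1, -1).float()
    cond = torch.zeros(2, height, width)
    for channel, points in enumerate((pos_points, neg_points)):
        for (px, py) in points:
            g = torch.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
            cond[channel] = torch.maximum(cond[channel], g)
    return cond  # concatenated with image/latent features downstream

# Example: two positive points mark the insertion region, one negative point
# marks an area that should stay clear.
cond = points_to_condition_map([(120, 200), (140, 210)], [(300, 80)], 480, 640)
print(cond.shape)  # torch.Size([2, 480, 640])
```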

📝 Abstract
This paper introduces Point2Insert, a sparse-point-based framework for flexible and user-friendly object insertion in videos, motivated by the growing demand for accurate, low-effort object placement. Existing approaches face two major challenges: mask-based insertion methods require labor-intensive mask annotations, while instruction-based methods struggle to place objects at precise locations. Point2Insert addresses these issues by requiring only a small number of sparse points instead of dense masks, eliminating the need for tedious mask drawing. Specifically, it supports both positive and negative points to indicate regions that are suitable or unsuitable for insertion, enabling fine-grained spatial control over object locations. The training of Point2Insert consists of two stages. In Stage 1, we train an insertion model that generates objects in given regions conditioned on either sparse-point prompts or a binary mask. In Stage 2, we further train the model on paired videos synthesized by an object removal model, adapting it to video insertion. Moreover, motivated by the higher insertion success rate of mask-guided editing, we leverage a mask-guided insertion model as a teacher to distill reliable insertion behavior into the point-guided model. Extensive experiments demonstrate that Point2Insert consistently outperforms strong baselines and even surpasses models with $10\times$ more parameters.
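The abstract's Stage 2 pairs the point-guided model with a mask-guided teacher. The sketch below shows one generic way such mask-guided knowledge distillation is commonly set up for diffusion-style models: a frozen mask-conditioned teacher provides a denoising target that the point-conditioned student matches, in addition to the usual noise-prediction loss. This is an assumption about the general technique, not the paper's actual training code; `student`, `teacher`, the conditioning inputs, and `lambda_distill` are hypothetical stand-ins.

```python
# Hypothetical sketch of mask-guided distillation for a point-conditioned
# insertion model (illustrative only, not the authors' implementation).
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, noisy_latents, timesteps, text_emb,
                      point_cond, mask_cond, noise, lambda_distill=0.5):
    """One training step: standard noise-prediction loss plus a distillation
    term pulling the point-guided prediction toward the mask-guided teacher."""
    # Student sees only the sparse-point conditioning.
    pred_student = student(noisy_latents, timesteps, text_emb, point_cond)

    # Teacher sees the dense mask; it is frozen and only provides a target.
    with torch.no_grad():
        pred_teacher = teacher(noisy_latents, timesteps, text_emb, mask_cond)

    loss_diffusion = F.mse_loss(pred_student, noise)        # ground-truth noise target
    loss_distill = F.mse_loss(pred_student, pred_teacher)   # imitate mask-guided behavior
    return loss_diffusion + lambda_distill * loss_distill
```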
Problem

Research questions and friction points this paper is trying to address.

video object insertion
sparse point guidance
mask annotation
precise object placement
user-friendly editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse point guidance
video object insertion
mask distillation
two-stage training
interactive editing
Yu Zhou
Institute of Artificial Intelligence, China Telecom (TeleAI); Sun Yat-sen University
Xiaoyan Yang
Advanced Digital Sciences Center
database, deep learning, text mining
Bojia Zi
The Chinese University of Hong Kong
AGI
Lihan Zhang
Institute of Artificial Intelligence, China Telecom (TeleAI); Tsinghua University
Ruijie Sun
Institute of Artificial Intelligence, China Telecom (TeleAI); Fudan University
Weishi Zheng
Sun Yat-sen University
Haibin Huang
Principal Research Scientist at TeleAI
Computer Graphics, Computer Vision, Geometric Modeling, 3D Deep Learning
Chi Zhang
Institute of Artificial Intelligence, China Telecom (TeleAI)
Xuelong Li
Institute of Artificial Intelligence, China Telecom (TeleAI)