🤖 AI Summary
Existing video object insertion methods either rely on dense mask annotations or struggle to achieve precise spatial localization, resulting in cumbersome workflows and limited performance. This work proposes the first sparse point-guided framework for video object insertion, enabling fine-grained spatial control through only a few user-provided positive and negative points. The approach employs a two-stage training strategy: first, an image-level insertion model conditioned on either points or masks is trained; then, the model is adapted to video using synthetic video pairs, enhanced by mask-guided knowledge distillation to improve point-based guidance. Experiments demonstrate that the proposed method significantly outperforms strong baselines, achieving substantial gains in both insertion success rate and localization accuracy—surpassing even models with over ten times more parameters.
📝 Abstract
This paper introduces Point2Insert, a sparse-point-based framework for flexible and user-friendly object insertion in videos, motivated by the growing demand for accurate, low-effort object placement. Existing approaches face two major challenges: mask-based insertion methods require labor-intensive mask annotations, while instruction-based methods struggle to place objects at precise locations. Point2Insert addresses these issues by requiring only a small number of sparse points instead of dense masks, eliminating the need for tedious mask drawing. Specifically, it supports both positive and negative points to indicate regions that are suitable or unsuitable for insertion, enabling fine-grained spatial control over object locations. The training of Point2Insert consists of two stages. In Stage 1, we train an insertion model that generates objects in given regions conditioned on either sparse-point prompts or a binary mask. In Stage 2, we further train the model on paired videos synthesized by an object removal model, adapting it to video insertion. Moreover, motivated by the higher insertion success rate of mask-guided editing, we leverage a mask-guided insertion model as a teacher to distill reliable insertion behavior into the point-guided model. Extensive experiments demonstrate that Point2Insert consistently outperforms strong baselines and even surpasses models with 10× more parameters.
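To make the two key ingredients concrete — conditioning a generator on sparse positive/negative points, and distilling a mask-guided teacher into a point-guided student — here is a minimal sketch. The point encoding (signed Gaussian blobs) and the region-weighted distillation loss are illustrative assumptions, not the paper's actual formulation; `points_to_map` and `distillation_loss` are hypothetical helper names.

```python
import numpy as np

def points_to_map(points, labels, shape, sigma=8.0):
    """Rasterize sparse user clicks into a dense conditioning map.

    points: (N, 2) array of (y, x) pixel coordinates.
    labels: (N,) array, +1 for positive points (insert here),
            -1 for negative points (avoid here).
    Each click contributes a signed Gaussian blob — one common way to
    feed sparse clicks to a conditional generator (an assumption here;
    the paper's actual point encoding may differ).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cond = np.zeros(shape, dtype=np.float32)
    for (py, px), lab in zip(points, labels):
        blob = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
        cond += lab * blob
    return np.clip(cond, -1.0, 1.0)

def distillation_loss(student_pred, teacher_pred, mask, alpha=0.5):
    """Mask-guided distillation: pull the point-guided student's output
    toward the mask-guided teacher's, weighting the insertion region
    (mask == 1) and the background separately. Hypothetical loss."""
    err = (student_pred - teacher_pred) ** 2
    inside = (err * mask).sum() / max(mask.sum(), 1e-6)
    outside = (err * (1 - mask)).sum() / max((1 - mask).sum(), 1e-6)
    return alpha * inside + (1 - alpha) * outside
```

In this sketch, the student would be trained on the sum of its usual generation loss and `distillation_loss`, so that point-only guidance inherits the teacher's more reliable placement behavior inside the masked insertion region.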