🤖 AI Summary
This work addresses robustness challenges in semi-supervised video object segmentation caused by object disappearance and reappearance, severe deformation, and interference from similar-looking objects. To overcome these issues, we propose an automatic re-prompting framework built upon SAM 3. Our approach detects candidate objects of the same category in subsequent frames, performs object-level matching using DINOv3, and retrieves reliable anchor points from a transformation-aware feature pool. These anchors, together with the initial-frame mask, are jointly injected into the tracker to enable multi-anchor propagation. By moving beyond the conventional reliance on initial prompts alone, our method significantly improves segmentation performance under deformation, occlusion, and semantic interference, achieving a J&F score of 51.17% on the MOSEv2 test set and ranking third in the competition track.
📝 Abstract
This technical report presents our solution to the MOSEv2 track of the PVUW 2026 Challenge, which targets complex semi-supervised video object segmentation. Built on SAM~3, we develop an automatic re-prompting framework to improve robustness under target disappearance and reappearance, severe transformation, and strong same-category distractors. Our method first applies the SAM~3 detector to later frames to identify same-category object candidates, and then performs DINOv3-based object-level matching against a transformation-aware target feature pool to retrieve reliable target anchors. These anchors are injected back into the SAM~3 tracker together with the first-frame mask, enabling multi-anchor propagation rather than relying solely on the initial prompt. This simple design directly addresses several core challenges of MOSEv2. Our solution achieves a J&F of 51.17% on the test set, ranking 3rd in the MOSEv2 track.
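The matching-and-pooling step of the framework can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`match_candidate`, `update_pool`), the cosine-similarity matching rule, the acceptance threshold `tau`, and the bounded FIFO pool are all assumptions standing in for the actual DINOv3 matching and transformation-aware pool logic.

```python
import numpy as np

def cosine_sim(query, pool):
    # Cosine similarity between one candidate embedding (vector)
    # and each stored target feature (rows of `pool`).
    q = query / (np.linalg.norm(query) + 1e-8)
    p = pool / (np.linalg.norm(pool, axis=1, keepdims=True) + 1e-8)
    return p @ q

def match_candidate(pool, candidates, tau=0.6):
    # Score each same-category candidate by its best similarity to any
    # feature in the pool; return the winning index, or None if no
    # candidate clears the threshold (target likely still absent).
    # `tau` is a hypothetical acceptance threshold.
    if len(candidates) == 0:
        return None
    scores = np.array([cosine_sim(c, pool).max() for c in candidates])
    best = int(scores.argmax())
    return best if scores[best] >= tau else None

def update_pool(pool, feat, max_size=8):
    # Append the matched embedding and keep the pool bounded (FIFO),
    # so recent appearance states dominate under severe transformation.
    pool = np.vstack([pool, feat[None]])
    return pool[-max_size:]
```

A matched candidate would then be converted into an anchor prompt (e.g. its box center) and injected into the tracker alongside the first-frame mask; unmatched frames simply keep propagating from the existing anchors.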