AI Summary
Existing video-based multimodal large language models struggle with pixel-level referring tracking: spatial drift, identity switching, and unstable initialization make it difficult to achieve both spatial precision and temporal consistency. This work proposes Target-Specific Tracked Features (TSF), which inject temporally aligned referring representations, along with a dual-prompt decoding mechanism ([BOX] and [SEG]) that fuses geometric priors with semantic segmentation to enable end-to-end, spatiotemporally consistent referring understanding. Built on a class-agnostic SAM2-based proposer, the method leverages a large-scale referring video dataset of 30,646 videos and 45,231 question-answer pairs. It achieves significant gains across six benchmarks, including an 8.9-point improvement in J&F on RVOS, a 5.0-point gain in visual grounding mIoU, and a 5.4-point increase in CLAIR score on GCG.
Abstract
Multimodal large language models (MLLMs) have advanced from image-level reasoning to pixel-level grounding, but extending these capabilities to video remains challenging: models must achieve both spatial precision and temporally consistent reference tracking. Existing video MLLMs often rely on a static segmentation token ([SEG]) for frame-wise grounding, which provides semantics but lacks temporal context, causing spatial drift, identity switches, and unstable initialization when objects move or reappear. We introduce SPARROW, a pixel-grounded video MLLM that unifies spatial accuracy and temporal stability through two key components: (i) Target-Specific Tracked Features (TSF), which inject temporally aligned referent cues during training, and (ii) a dual-prompt design that decodes box ([BOX]) and segmentation ([SEG]) tokens to fuse geometric priors with semantic grounding. SPARROW is supported by a curated referential video dataset of 30,646 videos and 45,231 Q&A pairs and operates end-to-end without external detectors via a class-agnostic SAM2-based proposer. Integrated into three recent open-source video MLLMs (UniPixel, GLUS, and VideoGLaMM), SPARROW delivers consistent gains across six benchmarks, with improvements of up to +8.9 J&F on RVOS, +5.0 mIoU on visual grounding, and +5.4 CLAIR on GCG. These results demonstrate that SPARROW substantially improves referential stability, spatial precision, and temporal coherence in pixel-grounded video understanding. Project page: https://risys-lab.github.io/SPARROW
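To make the dual-prompt idea concrete, the sketch below illustrates one plausible way [BOX] and [SEG] token states could be decoded and fused: the [BOX] state yields a normalized box (a geometric prior), the [SEG] state yields a mask embedding that is dotted with per-pixel features (semantic grounding), and the box biases the mask logits toward its interior. All shapes, heads, and the prior strength are illustrative assumptions, not the paper's actual architecture; random weights stand in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32   # feature-map resolution (illustrative)
D = 64       # hidden size (illustrative)

# Hypothetical hidden states the LLM emits at the [BOX] and [SEG] positions.
h_box = rng.normal(size=D)
h_seg = rng.normal(size=D)

# Hypothetical decoding heads (random weights stand in for trained ones).
W_box = rng.normal(size=(4, D)) / np.sqrt(D)   # -> normalized (cx, cy, w, h)
W_seg = rng.normal(size=(D, D)) / np.sqrt(D)   # -> mask embedding
pixel_feats = rng.normal(size=(H, W, D))       # per-pixel vision features

# [BOX]: geometric prior as a normalized box; sigmoid keeps coords in [0, 1].
cx, cy, bw, bh = 1.0 / (1.0 + np.exp(-(W_box @ h_box)))

# [SEG]: semantic mask logits via embedding / pixel-feature dot product.
mask_logits = pixel_feats @ (W_seg @ h_seg)    # shape (H, W)

# Fuse: bias the logits toward pixels inside the predicted box.
ys, xs = np.mgrid[0:H, 0:W]
inside = ((np.abs(xs / W - cx) <= bw / 2) &
          (np.abs(ys / H - cy) <= bh / 2))
fused_logits = mask_logits + 4.0 * inside      # 4.0: illustrative prior strength

final_mask = fused_logits > 0                  # boolean (H, W) segmentation
print(final_mask.shape)
```

In this toy setup, the box acts as a soft constraint rather than a hard crop: pixels outside it can still be kept if their semantic score is high enough, which is one simple way geometric and semantic cues can be combined.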