🤖 AI Summary
Existing video multimodal large language models (MLLMs) model bounding boxes as autoregressive text sequences, leading to verbose outputs, spatial errors that accumulate over time, and localization drift. This work proposes a collaborative framework integrating a video LLM with an open-vocabulary detector. Its core innovations are: (1) a reference-semantic token (RST) mechanism, which leverages the user query’s semantics both as a control signal and as a substitute for the detector's textual embeddings, enabling end-to-end referring understanding and grounding; and (2) tube-mined temporal regularization (TTReg), which enforces temporal consistency of object trajectories across frames. By circumventing error-prone autoregressive coordinate generation, the method significantly improves spatiotemporal localization accuracy and enhances complex semantic reasoning—such as causal and sequential inference—on fine-grained video understanding benchmarks including STVG and GroundedVQA. The results validate the efficacy of co-modeling detection priors with large language models.
📝 Abstract
Spatio-temporal grounding and reasoning aims to locate the temporal segment and spatial region of an event in a video given a user query, while also reasoning about semantics such as causality, temporal order, and action relationships. To achieve this, current MLLMs primarily treat bounding boxes as text tokens and generate them autoregressively. However, such autoregressive spatial decoding leads to very long output sequences, causing spatial errors to accumulate over time and the localization results to progressively drift across a video. To address this, we present a Detector-Empowered Video LLM (DEViL), which couples a video LLM with an open-vocabulary detector (OVD). Specifically, the MLLM and the detector are connected via a reference-semantic token (RST) that distills the user query into a rich semantic representation. Unlike tokens that merely serve as spatial prompts or segmenter switches, the RST functions as both a control signal and a replacement for the OVD's text embedding, enabling end-to-end learning of both referential understanding and spatial localization. Furthermore, we propose a tube-mined temporal regularization (TTReg) within the OVD, which drives the OVD to generate temporally consistent queries for target objects, thereby ensuring effective temporal association. Experiments demonstrate that DEViL achieves strong performance across various fine-grained video understanding tasks, particularly STVG and GroundedVQA. Code will be released at https://github.com/gaostar123/DeViL.
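The two mechanisms above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the pooling, projection, similarity step, and loss form are all assumptions, and the function names (`reference_semantic_token`, `detector_scores`, `ttreg_loss`) and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D_LLM, D_DET = 16, 8  # hypothetical LLM and detector hidden sizes


def reference_semantic_token(query_hidden, w_proj):
    """Distill the LLM hidden states of the user query into a single RST
    and project it into the detector's text-embedding space (assumed:
    mean pooling + linear projection)."""
    rst = query_hidden.mean(axis=0)        # (D_LLM,)
    return w_proj @ rst                    # (D_DET,)


def detector_scores(region_feats, text_embed):
    """Open-vocabulary matching: cosine similarity between region features
    and the text embedding -- here the RST stands in for the OVD's
    text-encoder output."""
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    return r @ t                           # (num_regions,)


def ttreg_loss(tube_queries):
    """Assumed TTReg form: penalize frame-to-frame change of the matched
    object query along a mined tube, pushing the detector toward
    temporally consistent queries."""
    diffs = tube_queries[1:] - tube_queries[:-1]
    return float((diffs ** 2).mean())


# Toy forward pass on random features.
hidden = rng.normal(size=(5, D_LLM))       # 5 query tokens
w = rng.normal(size=(D_DET, D_LLM))
rst = reference_semantic_token(hidden, w)
scores = detector_scores(rng.normal(size=(10, D_DET)), rst)
loss = ttreg_loss(rng.normal(size=(4, D_DET)))  # 4-frame tube
```

The key design point this mirrors is that the RST replaces, rather than prompts, the detector's text embedding, so referring semantics and localization are trained through one differentiable path, while the TTReg term couples per-frame detections into a coherent tube.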