🤖 AI Summary
Existing multimodal large language models (MLLMs) underperform on spatio-temporal video grounding (STVG) because their training objectives are misaligned with the task and standard visual encoders lack fine-grained region–word alignment capability. To address this without architectural modification, we propose a refined fine-tuning framework. Our contributions are threefold: (1) We introduce a bounding-box chain-of-thought, a novel explicit modeling of the progressive reasoning process for spatio-temporal localization; (2) We design a geometry-aware supervision signal coupled with a multi-dimensional reinforcement learning reward function, jointly optimizing localization accuracy, temporal consistency, and semantic alignment; (3) We enhance the fine-grained region–word alignment capacity of off-the-shelf visual encoders. On HCSTVG-v1, our method achieves a 7.3% absolute gain in mean temporal IoU (m_tIoU) over prior state-of-the-art methods and significantly outperforms existing MLLM-based approaches, demonstrating strong open-vocabulary generalization.
📝 Abstract
Spatio-temporal video grounding (STVG) requires localizing a target object in untrimmed videos both temporally and spatially from natural language descriptions. Despite their strong language understanding, multimodal large language models (MLLMs) underperform on STVG due to misaligned training objectives and weak fine-grained region–word alignment in standard visual encoders. To address this, we propose STVG-o1, the first framework that enables off-the-shelf MLLMs to achieve state-of-the-art STVG performance without any architectural modifications. Our method introduces a bounding-box chain-of-thought mechanism that explicitly reasons about spatio-temporal locations in an intermediate step before producing the final prediction. We further design a multi-dimensional reinforcement learning reward function consisting of format, consistency, temporal, spatial, and think rewards, which provides geometry-aware supervision through reinforcement fine-tuning. Evaluated on HCSTVG-v1/v2 and VidSTG, STVG-o1 sets new state-of-the-art results on HCSTVG, outperforming the best task-specific method by 7.3% in m_tIoU on HCSTVG-v1, matching specialized models on VidSTG, and surpassing all existing MLLM-based approaches by large margins. It also demonstrates strong open-vocabulary generalization across datasets, establishing MLLMs as viable and powerful backbones for precise spatio-temporal grounding. Our code and models will be released.
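To make the multi-dimensional reward concrete, the sketch below shows one plausible way the five components (format, consistency, temporal, spatial, think) could combine into a scalar reward for reinforcement fine-tuning. The weights, function names, and exact formulas are illustrative assumptions, not taken from the paper; only the five reward dimensions and the use of temporal/spatial IoU are described in the abstract.

```python
# Hypothetical sketch of a five-component STVG reward.
# Weights and formulas are assumptions for illustration only.

def temporal_iou(pred, gt):
    """IoU between two [start, end] time intervals (frames or seconds)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def box_iou(a, b):
    """IoU between two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def stvg_reward(pred_span, gt_span, pred_boxes, gt_boxes,
                format_ok, think_ok, consistency,
                weights=(0.1, 0.1, 0.1, 0.35, 0.35)):
    """Weighted sum of format, think, consistency, temporal, and spatial
    rewards. `consistency` is a score in [0, 1] measuring agreement between
    the chain-of-thought boxes and the final prediction (assumed)."""
    w_fmt, w_think, w_cons, w_t, w_s = weights
    # Mean per-frame box IoU over the predicted trajectory.
    spatial = (sum(box_iou(p, g) for p, g in zip(pred_boxes, gt_boxes))
               / len(gt_boxes)) if gt_boxes else 0.0
    return (w_fmt * float(format_ok)
            + w_think * float(think_ok)
            + w_cons * consistency
            + w_t * temporal_iou(pred_span, gt_span)
            + w_s * spatial)
```

With this weighting, a perfectly formatted answer whose span and boxes match the ground truth exactly receives a reward of 1.0, and each component degrades the total independently, which is the kind of geometry-aware signal the abstract describes.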