TubeRMC: Tube-conditioned Reconstruction with Mutual Constraints for Weakly-supervised Spatio-Temporal Video Grounding

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In weakly-supervised spatio-temporal video grounding, existing methods suffer from object misidentification and trajectory inconsistency because they fuse text and video only at a late stage, generating tube proposals independently of the query. To address this, we propose TubeRMC, a framework that uses pretrained visual grounding models to generate text-conditioned tube proposals and refines them through a tube-conditioned reconstruction mechanism. Specifically, TubeRMC employs three parallel reconstruction branches, spatio-temporal, spatial, and temporal, each reconstructing key clues in the query from tube features, and introduces mutual constraints between spatial and temporal proposals to improve their quality. On the VidSTG and HCSTVG benchmarks, TubeRMC outperforms existing weakly-supervised methods, and qualitative analysis shows gains in both target identification and tracking consistency.

📝 Abstract
Spatio-Temporal Video Grounding (STVG) aims to localize a spatio-temporal tube that corresponds to a given language query in an untrimmed video. This is a challenging task since it involves complex vision-language understanding and spatio-temporal reasoning. Recent works have explored the weakly-supervised setting in STVG to eliminate reliance on fine-grained annotations like bounding boxes or temporal stamps. However, they typically follow a simple late-fusion manner, which generates tubes independent of the text description, often resulting in failed target identification and inconsistent target tracking. To address this limitation, we propose a Tube-conditioned Reconstruction with Mutual Constraints (TubeRMC) framework that generates text-conditioned candidate tubes with pre-trained visual grounding models and further refines them via tube-conditioned reconstruction with spatio-temporal constraints. Specifically, we design three reconstruction strategies from temporal, spatial, and spatio-temporal perspectives to comprehensively capture rich tube-text correspondences. Each strategy is equipped with a Tube-conditioned Reconstructor, which uses spatio-temporal tubes as the condition to reconstruct the key clues in the query. We further introduce mutual constraints between spatial and temporal proposals to enhance their quality for reconstruction. TubeRMC outperforms existing methods on two public benchmarks, VidSTG and HCSTVG. Further visualization shows that TubeRMC effectively mitigates both target identification errors and inconsistent tracking.
Problem

Research questions and friction points this paper is trying to address.

Localizing spatio-temporal tubes corresponding to language queries in videos
Addressing target identification errors in weakly-supervised video grounding
Solving inconsistent target tracking without fine-grained annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates text-conditioned candidate tubes
Refines tubes via tube-conditioned reconstruction
Uses mutual constraints between spatial and temporal proposals
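The contributions above combine three reconstruction branches with a constraint coupling the spatial and temporal proposals. The paper does not publish pseudocode here, so the following is only a minimal toy sketch of how such a combined objective could be assembled: the reconstructors are stood in for by hypothetical linear maps, and the mutual constraint is approximated as temporal-IoU agreement between the spatial tube's active span and the temporal proposal. All names (`tubermc_loss`, `Ws`, `lam`) are illustrative, not from the paper.

```python
import numpy as np

def reconstruction_loss(tube_feat, query_feat, W):
    # Toy tube-conditioned reconstructor: predict the (masked) query
    # representation from pooled tube features via a linear map W.
    pred = tube_feat.mean(axis=0) @ W          # (T, d) -> (d_q,)
    return float(np.mean((pred - query_feat) ** 2))

def temporal_iou(a, b):
    # IoU of two temporal segments given as (start, end) frame indices.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def tubermc_loss(tube_feat, spatial_feat, temporal_feat, query_feat,
                 spatial_span, temporal_span, Ws, lam=1.0):
    # Sum the three reconstruction branches (spatio-temporal, spatial,
    # temporal) and add a mutual-constraint term that rewards agreement
    # between the spatial tube's span and the temporal proposal.
    l_st = reconstruction_loss(tube_feat, query_feat, Ws["st"])
    l_s = reconstruction_loss(spatial_feat, query_feat, Ws["s"])
    l_t = reconstruction_loss(temporal_feat, query_feat, Ws["t"])
    l_mc = 1.0 - temporal_iou(spatial_span, temporal_span)
    return l_st + l_s + l_t + lam * l_mc
```

With perfectly reconstructing maps and fully agreeing proposals, every term vanishes; disagreement between the spatial and temporal spans raises the loss through the `l_mc` term, which is the intuition behind the mutual constraint.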