🤖 AI Summary
This paper addresses cross-domain video temporal grounding (TG) under the challenging setting of a target domain with zero labels. It proposes a lightweight and efficient knowledge-transfer framework: a source-trained GRPO (Group Relative Policy Optimization) model generates multiple candidate predictions on a small set of unlabeled target videos, and an uncertainty-quantified rollout strategy models the variance across these predictions to estimate confidence, enabling unsupervised pseudo-label selection and reward-weighted reinforcement-learning optimization. The framework integrates vision-language representations, group relative policy optimization, and an uncertainty-aware adaptive rollout mechanism. Evaluated on three benchmarks across six cross-domain settings, the approach significantly outperforms existing baselines, generalizing well from only a minimal number of target-domain videos while drastically reducing computational and memory overhead and supporting real-time inference.
📝 Abstract
Video Temporal Grounding (TG) aims to temporally locate video segments matching a natural language description (a query) in a long video. While Vision-Language Models (VLMs) are effective at holistic semantic matching, they often struggle with fine-grained temporal localisation. Recently, Group Relative Policy Optimisation (GRPO) has been used to reformulate the inference process as a reinforcement learning task, enabling fine-grained grounding and achieving strong in-domain performance. However, GRPO relies on labelled data, making it unsuitable for unlabelled domains. Moreover, because videos are large and expensive to store and process, performing full-scale adaptation introduces prohibitive latency and computational overhead, making it impractical for real-time deployment. To overcome both problems, we introduce a Data-Efficient Unlabelled Cross-domain Temporal Grounding method, in which a model is first trained on a labelled source domain and then adapted to a target domain using only a small number of unlabelled videos from that domain. This approach eliminates the need for target annotation and keeps both computational and storage overhead low enough to run in real time. Specifically, we introduce Uncertainty-quantified Rollout Policy Adaptation (URPA) for cross-domain knowledge transfer in learning video temporal grounding without target labels. URPA generates multiple candidate predictions using GRPO rollouts, averages them to form a pseudo label, and estimates confidence from the variance across these rollouts. This confidence then weights the training rewards, guiding the model to focus on reliable supervision. Experiments on three datasets across six cross-domain settings show that URPA generalises well using only a few unlabelled target videos. Code will be released upon publication.
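The abstract's description of URPA (rollouts → averaged pseudo label → variance-based confidence → weighted rewards) can be made concrete with a short sketch. The snippet below is a minimal illustration only, not the paper's implementation: the function names (`urpa_pseudo_label`, `weighted_rewards`), the exponential mapping from variance to confidence, and the temporal-IoU reward are all assumptions chosen to show the overall flow.

```python
import numpy as np

def urpa_pseudo_label(rollout_spans):
    """Form a pseudo label and confidence from GRPO rollout predictions.

    rollout_spans: K candidate (start, end) segments predicted for one
    unlabelled target video, shape (K, 2).
    """
    spans = np.asarray(rollout_spans, dtype=float)
    pseudo = spans.mean(axis=0)           # average the rollouts -> pseudo label
    variance = spans.var(axis=0).mean()   # spread across rollouts
    confidence = np.exp(-variance)        # low variance -> high confidence (assumed mapping)
    return pseudo, confidence

def temporal_iou(pred, target):
    """Standard temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(pred[1], target[1]) - max(pred[0], target[0]))
    union = (pred[1] - pred[0]) + (target[1] - target[0]) - inter
    return inter / union if union > 0 else 0.0

def weighted_rewards(rollout_spans):
    """Confidence-weighted rewards for the reinforcement-learning update."""
    pseudo, confidence = urpa_pseudo_label(rollout_spans)
    # Each rollout is rewarded by its agreement with the pseudo label,
    # scaled by how trustworthy the pseudo label appears to be.
    return [confidence * temporal_iou(span, pseudo) for span in rollout_spans]

# Example: five rollouts for one video, in seconds.
rollouts = [(12.0, 18.5), (11.5, 19.0), (12.5, 18.0), (13.0, 19.5), (12.2, 18.8)]
print(weighted_rewards(rollouts))
```

Under this reading, tightly clustered rollouts yield a confident pseudo label and strong training signal, while scattered rollouts are down-weighted, which matches the abstract's claim that confidence "guides the model to focus on reliable supervision."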