UniversalVTG: A Universal and Lightweight Foundation Model for Video Temporal Grounding

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in video temporal localization—limited cross-dataset generalization, heterogeneous query styles, high computational costs of large models, and difficulties in processing long videos—by proposing a lightweight unified framework. The approach leverages large-scale cross-dataset pretraining and introduces an offline Query Unifier module that transforms diverse queries into a standardized declarative representation, thereby mitigating negative transfer during joint training. Additionally, an efficient localization head is designed to enable effective long-video understanding. Evaluated on five benchmark datasets, a single instance of the proposed model matches or surpasses specialized models in performance while using over two orders of magnitude fewer parameters than prevailing multimodal large language models, significantly enhancing both efficiency and generalizability.
📝 Abstract
Video temporal grounding (VTG) is typically tackled with dataset-specific models that transfer poorly across domains and query styles. Recent efforts to overcome this limitation have adapted large multimodal language models (MLLMs) to VTG, but their high compute cost and limited video context still hinder long-video grounding. We instead scale unified supervision while keeping the model lightweight. We present UniversalVTG, a single VTG model trained with large-scale cross-dataset pretraining. An offline Query Unifier canonicalizes heterogeneous query formats into a shared declarative space, reducing linguistic mismatch and preventing the negative transfer observed under naïve joint training. Combined with an efficient grounding head, UniversalVTG scales to long, untrimmed videos. Across diverse benchmarks (GoalStep-StepGrounding, Ego4D-NLQ, TACoS, Charades-STA, and ActivityNet-Captions), one UniversalVTG checkpoint achieves state-of-the-art performance versus dedicated VTG models. Moreover, despite being $>100\times$ smaller than recent MLLM-based approaches, UniversalVTG matches or exceeds their accuracy on multiple benchmarks, offering a practical alternative to parameter-heavy MLLMs.
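The abstract describes the Query Unifier only at a high level: it maps heterogeneous query styles (e.g., question-style Ego4D-NLQ queries vs. caption-style Charades-STA queries) into one declarative space before training. As a rough illustration of that canonicalization idea only, a minimal rule-based sketch might look like the following; the function name and rewrite rules are hypothetical and are not the paper's implementation (which operates offline, presumably with a learned or LLM-based rewriter):

```python
# Illustrative sketch only -- NOT the paper's Query Unifier. Shows the idea of
# mapping mixed query styles to a shared declarative form with toy rules.

def unify_query(query: str) -> str:
    """Map a question- or caption-style query to a declarative statement."""
    q = query.strip().rstrip("?.")
    lowered = q.lower()
    # Question-style egocentric queries, e.g. "Where did I put the knife?"
    if lowered.startswith(("where did i ", "when did i ")):
        rest = q.split(" ", 3)[3]   # drop the "Where/When did I" prefix
        return f"I {rest}."         # -> "I put the knife."
    # Caption- or step-style queries pass through with normalized punctuation.
    return f"{q}."

print(unify_query("Where did I put the knife?"))  # I put the knife.
print(unify_query("A person opens the door."))    # A person opens the door.
```

In a real system each source dataset's query style would get its own (learned) mapping, so that joint training sees linguistically consistent inputs rather than a mixture of questions, captions, and imperative step descriptions.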
Problem

Research questions and friction points this paper is trying to address.

Video Temporal Grounding
Cross-domain Generalization
Long-video Understanding
Query Style Heterogeneity
Model Transferability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Temporal Grounding
Foundation Model
Query Unifier
Cross-dataset Pretraining
Lightweight Architecture
👥 Authors
Joungbin An
The University of Texas at Austin
Agrim Jain
The University of Texas at Austin
Kristen Grauman
Professor of Computer Science, University of Texas at Austin
Computer Vision, Machine Learning