🤖 AI Summary
Existing video understanding benchmarks largely model temporal elements (camera motion, scene evolution, action progression, and attribute changes) in isolation, neglecting their dynamic interdependencies. To address this, we propose TUNA, a fine-grained temporal understanding benchmark designed for densely dynamic videos. It comprises two core tasks, temporal description generation (captioning) and question answering, and systematically covers four spatiotemporal dimensions: camera motion, scene evolution, multi-agent interaction, and attribute dynamics. TUNA provides human-annotated dense temporal labels, dimension-wise decoupled evaluation metrics, and a hybrid scoring mechanism that combines vision-language-model judgment with rule-based fine-grained assessment. Evaluation of state-of-the-art models reveals systematic deficiencies in action characterization, multi-agent modeling, and camera motion perception. We release the dataset and open-source evaluation code to advance robust, interpretable research on video temporal understanding.
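The summary above mentions dimension-wise decoupled metrics and a hybrid (model-plus-rule) scoring mechanism but does not spell them out. The sketch below is a minimal, hypothetical illustration of that general idea: a per-dimension score that blends a vision-language-model judgment with a simple rule-based keyword check. The dimension names, keyword matching, and the `alpha` weighting are illustrative assumptions, not TUNA's actual metric definitions.

```python
# Hypothetical sketch of dimension-wise decoupled hybrid scoring.
# All names, weights, and rules here are illustrative assumptions;
# they are NOT the benchmark's actual implementation.

DIMENSIONS = ["camera", "scene", "action", "attribute"]

def rule_score(pred: str, keywords: list[str]) -> float:
    """Rule-based part: fraction of required keywords found in the prediction."""
    if not keywords:
        return 0.0
    hits = sum(1 for k in keywords if k.lower() in pred.lower())
    return hits / len(keywords)

def hybrid_score(vlm_score: float, pred: str, keywords: list[str],
                 alpha: float = 0.5) -> float:
    """Blend a vision-language-model judgment with the rule-based check."""
    return alpha * vlm_score + (1 - alpha) * rule_score(pred, keywords)

def dimension_report(pred: str, refs: dict, vlm_scores: dict) -> dict:
    """Decoupled evaluation: one hybrid score per temporal dimension."""
    return {d: round(hybrid_score(vlm_scores[d], pred, refs[d]), 3)
            for d in DIMENSIONS}

# Toy example: a generated caption scored against per-dimension references.
pred = "The camera pans left while a dog runs across the wet street."
refs = {"camera": ["pan"], "scene": ["street"],
        "action": ["runs"], "attribute": ["wet"]}
vlm = {"camera": 0.9, "scene": 0.8, "action": 0.7, "attribute": 0.6}
print(dimension_report(pred, refs, vlm))
# → {'camera': 0.95, 'scene': 0.9, 'action': 0.85, 'attribute': 0.8}
```

Reporting one score per dimension, rather than a single aggregate, is what makes the evaluation interpretable: a model weak on camera motion shows up directly in the `camera` entry instead of being averaged away.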
📝 Abstract
Videos are unique in their integration of temporal elements, including camera motion, scene, action, and attributes, along with their dynamic relationships over time. However, existing benchmarks for video understanding often treat these properties separately or focus narrowly on specific aspects, overlooking the holistic nature of video content. To address this, we introduce TUNA, a temporal-oriented benchmark for fine-grained understanding of dense dynamic videos, with two complementary tasks: captioning and QA. TUNA features diverse video scenarios and dynamics, supported by interpretable and robust evaluation criteria. We evaluate several leading models on our benchmark, providing fine-grained performance assessments across multiple dimensions. This evaluation reveals key challenges in video temporal understanding, such as limited action description, inadequate multi-subject understanding, and insensitivity to camera motion, offering valuable insights for improving video understanding models. The data and code are available at https://friedrichor.github.io/projects/TUNA.