Sports-QA: A Large-Scale Video Question Answering Benchmark for Complex and Professional Sports

📅 2024-01-03
🏛️ arXiv.org
📈 Citations: 10
Influential: 2
🤖 AI Summary
Existing VideoQA datasets lack fine-grained modeling of professional sports actions, hindering effective reasoning for descriptive, temporal, causal, and counterfactual questions. To address this, we introduce Sports-QA—the first video question answering benchmark tailored to professional sports scenarios—covering multiple sports disciplines and four categories of complex reasoning tasks. Methodologically, we propose the Auto-Focus Transformer (AFT), which employs an attention-driven dynamic focusing mechanism to adaptively model multi-scale temporal information and integrates joint video–language representation learning. Extensive experiments demonstrate that AFT achieves state-of-the-art performance on Sports-QA, substantially outperforming general-purpose VideoQA models. This work constitutes the first systematic validation of an architecture explicitly designed for fine-grained sports action understanding and dynamic logical reasoning, establishing a new foundation for domain-specific VideoQA research.

📝 Abstract
Reasoning over sports videos for question answering is an important task with numerous applications, such as player training and information retrieval. However, this task has remained unexplored due to the lack of relevant datasets and the challenges it presents. Most datasets for video question answering (VideoQA) focus mainly on general, coarse-grained understanding of daily-life videos, which is not applicable to sports scenarios that require professional action understanding and fine-grained motion analysis. In this paper, we introduce the first dataset, named Sports-QA, specifically designed for the sports VideoQA task. The Sports-QA dataset includes various types of questions, such as descriptions, chronologies, causalities, and counterfactual conditions, covering multiple sports. Furthermore, to address the characteristics of the sports VideoQA task, we propose a new Auto-Focus Transformer (AFT) capable of automatically focusing on particular scales of temporal information for question answering. We conduct extensive experiments on Sports-QA, including baseline studies and the evaluation of different methods. The results demonstrate that our AFT achieves state-of-the-art performance.
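The abstract describes AFT as automatically focusing on particular scales of temporal information, but the architectural details are not given on this page. As a rough, hypothetical illustration of the general idea of attending over multiple temporal scales (all function names, the pooling scheme, and the attention formulation below are assumptions for illustration, not the authors' design), a minimal NumPy sketch:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_scale_focus(frames, question, scales=(1, 2, 4)):
    """Hypothetical scale-level attention: pool frame features at several
    temporal scales, then let a question embedding softly attend over the
    pooled scale summaries (an assumption-based stand-in for AFT's
    auto-focus mechanism, not the published architecture)."""
    T, D = frames.shape
    summaries = []
    for s in scales:
        # average-pool non-overlapping windows of length s, then mean over time
        n = T // s
        pooled = frames[: n * s].reshape(n, s, D).mean(axis=1)
        summaries.append(pooled.mean(axis=0))
    S = np.stack(summaries)       # (num_scales, D) scale summaries
    scores = S @ question         # question–scale affinity
    weights = softmax(scores)     # soft "focus" over temporal scales
    return weights @ S, weights   # fused feature and attention weights

rng = np.random.default_rng(0)
frames = rng.standard_normal((16, 8))    # 16 frames, 8-dim features
question = rng.standard_normal(8)        # question embedding
fused, w = multi_scale_focus(frames, question)
```

In this sketch, a question about a fine-grained action would ideally place most of its attention weight on the short-window scale, while a question about chronology would weight the coarser scales; a learned model would train the pooling and attention parameters end-to-end.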
Problem

Research questions and friction points this paper is trying to address.

Video Question Answering
Sports Video Analysis
Machine Understanding Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sports-QA dataset
Auto-Focus Transformer
Sports video understanding
Haopeng Li
PhD of Electrical Engineering, KTH Royal Institute of Technology
Mobile Visual Computing and Communication - Video Coding - Mobile Video - Visual Search
Andong Deng
Center for Research in Computer Vision, University of Central Florida
Qiuhong Ke
ARC DECRA Fellow, Senior Lecturer, Monash University
Deep Learning - Computer Vision - Action Recognition - Video Understanding
Jun Liu
Information Systems Technology and Design (ISTD) Pillar, Singapore University of Technology and Design
Hossein Rahmani
Professor, Lancaster University
Computer Vision - Machine Learning - Video Analysis - Action Recognition - Human Behavior Analysis
Yulan Guo
Professor, Sun Yat-sen University
3D Vision - Machine Learning - Robotics
B. Schiele
Department of Computer Vision and Machine Learning, Max Planck Institute for Informatics, Saarland Informatics Campus
Chen Chen