🤖 AI Summary
Existing sports video understanding methods are constrained by single-sport, single-task settings or training-free paradigms, limiting their capability to handle high-speed dynamics, complex rules, and long-horizon reasoning. To address these challenges, we propose the first end-to-end trainable multimodal large language model framework for sports video understanding. Our method introduces a tool-augmented “active video thinking” mechanism—where the model dynamically selects and invokes specialized tools (e.g., frame extraction)—and a gated reward reinforcement learning strategy to enable proactive, goal-directed reasoning beyond passive perception. We curate 78K high-quality chain-of-thought trajectories via multi-source data distillation, followed by two-stage training: supervised fine-tuning and reinforcement learning. Evaluated on 6.7K diverse test questions spanning multiple sports and tasks, our approach significantly outperforms both closed- and open-source baselines, establishing new state-of-the-art performance and introducing the first cross-sport, multi-task benchmark for sports video understanding.
📝 Abstract
Sports video understanding presents unique challenges, requiring models to perceive high-speed dynamics, comprehend complex rules, and reason over long temporal contexts. While Multimodal Large Language Models (MLLMs) have shown promise in general domains, the current state of research in sports remains narrowly focused: existing approaches are either single-sport centric, limited to specific tasks, or rely on training-free paradigms that lack a robust, learned reasoning process. To address this gap, we introduce DeepSport, the first end-to-end trained MLLM framework designed for multi-task, multi-sport video understanding. DeepSport shifts the paradigm from passive frame processing to active, iterative reasoning, empowering the model to "think with videos" by dynamically interrogating content via a specialized frame-extraction tool. To enable this, we propose a data distillation pipeline that synthesizes high-quality Chain-of-Thought (CoT) trajectories from 10 diverse data sources, creating a unified resource of 78K training examples. We then employ a two-stage training strategy, Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) with a novel gated tool-use reward, to optimize the model's reasoning process. Extensive experiments on a test benchmark of 6.7K questions demonstrate that DeepSport achieves state-of-the-art performance, significantly outperforming both proprietary and open-source baselines. Our work establishes a new foundation for domain-specific video reasoning that addresses the complexities of diverse sports.
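To make the "gated tool-use reward" idea concrete, here is a minimal sketch of how such a reward might be structured. All names, weights, and the gating condition are illustrative assumptions, not the paper's actual specification: the key idea is that the correctness reward is only granted (the "gate") when the model's tool invocations are well-formed, so the policy cannot earn reward while skipping or misusing the frame-extraction tool.

```python
# Hypothetical sketch of a gated tool-use reward for RL fine-tuning.
# The gate (tool_calls_valid) and the bonus weight (0.1) are
# illustrative assumptions, not values from the paper.

def gated_reward(answer_correct: bool,
                 tool_calls_valid: bool,
                 format_ok: bool) -> float:
    """Return a scalar reward for a single rollout."""
    if not tool_calls_valid:
        # Gate: malformed or missing tool calls block all reward,
        # discouraging answers produced without inspecting frames.
        return 0.0
    reward = 1.0 if answer_correct else 0.0
    if format_ok:
        # Small bonus for output that parses cleanly.
        reward += 0.1
    return reward
```

Under this sketch, a correct answer with invalid tool use earns nothing, while a correct, well-formatted answer with valid tool use earns the full reward plus the format bonus.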