🤖 AI Summary
TrackNet variants face two critical bottlenecks in high-speed, small-object tracking: poor occlusion robustness and motion-direction ambiguity. Versions V1–V3 rely solely on visual cues, limiting occlusion resilience, while V4 employs absolute-difference motion inputs that discard motion polarity and induce directional ambiguity. This paper proposes TrackNetV5, a spatiotemporal tracking architecture addressing both issues. Its core contributions are: (1) a Motion Direction Decoupling (MDD) module that explicitly models and preserves motion polarity; and (2) a Residual-Driven Spatio-Temporal Refinement (R-STR) head that leverages Transformer-based factorized spatiotemporal context modeling to jointly integrate visual features and motion polarity, enabling coarse-to-fine recovery of occluded targets. Evaluated on the TrackNetV2 dataset, the method achieves a 98.59% F1-score and 97.33% accuracy, surpassing the state of the art while incurring only a 3.7% computational overhead and preserving real-time performance.
📝 Abstract
The TrackNet series has established a strong baseline for fast-moving small object tracking in sports. However, existing iterations face significant limitations: V1–V3 struggle with occlusions due to a reliance on purely visual cues, while TrackNetV4, despite introducing motion inputs, suffers from directional ambiguity because its absolute-difference method discards motion polarity. To overcome these bottlenecks, we propose TrackNetV5, a robust architecture integrating two novel mechanisms. First, to recover lost directional priors, we introduce the Motion Direction Decoupling (MDD) module. Unlike V4, MDD decomposes temporal dynamics into signed polarity fields, explicitly encoding both movement occurrence and trajectory direction. Second, we propose the Residual-Driven Spatio-Temporal Refinement (R-STR) head. Operating on a coarse-to-fine paradigm, this Transformer-based module leverages factorized spatio-temporal contexts to estimate a corrective residual, effectively recovering occluded targets. Extensive experiments on the TrackNetV2 dataset demonstrate that TrackNetV5 achieves a new state-of-the-art F1-score of 0.9859 and an accuracy of 0.9733, significantly outperforming previous versions. Notably, this performance leap is achieved with a marginal 3.7% increase in FLOPs compared to V4, maintaining real-time inference capabilities while delivering superior tracking precision.
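To make the polarity argument concrete, the sketch below contrasts V4-style absolute differencing with a signed decomposition in the spirit of MDD. This is a minimal illustration assuming simple per-pixel frame differences (the function names and the toy 1×5 "ball" example are ours, not the paper's exact formulation):

```python
import numpy as np

def absolute_difference(prev, curr):
    # TrackNetV4-style motion cue: |I_t - I_{t-1}| discards the sign,
    # so pixels the object enters and pixels it leaves look identical.
    return np.abs(curr - prev)

def signed_polarity_fields(prev, curr):
    # Illustrative MDD-style decomposition (an assumption, not the paper's
    # exact operator): split the signed difference D = I_t - I_{t-1} into
    # a positive and a negative polarity channel, preserving direction.
    diff = curr - prev
    pos = np.maximum(diff, 0.0)   # regions the object moves into
    neg = np.maximum(-diff, 0.0)  # regions the object vacates
    return pos, neg

# Toy example: a bright "ball" moves one pixel to the right.
prev = np.zeros((1, 5)); prev[0, 1] = 1.0
curr = np.zeros((1, 5)); curr[0, 2] = 1.0

pos, neg = signed_polarity_fields(prev, curr)
# The absolute difference lights up columns 1 and 2 identically, while the
# polarity fields separate arrival (pos at column 2) from departure (neg at
# column 1), recovering the left-to-right direction of motion.
```

Note that `pos + neg` reproduces the absolute difference exactly, so the decomposition keeps everything V4's cue provides while adding the directional prior the abstract argues is lost.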