🤖 AI Summary
Action recognition performance degrades significantly for high-speed motions, limiting model robustness. This paper identifies a negative correlation between motion speed and recognition accuracy, and proposes the Velocity-Aware Action Recognition (VA-AR) framework to address it. Specifically, the authors design a Mixture of Window Attention (MoWA) mechanism that dynamically adjusts the temporal attention window size according to motion speed, enabling adaptive temporal modeling. They further integrate multi-scale temporal features within a Transformer architecture to learn speed-invariant action representations. The method achieves state-of-the-art results on five mainstream benchmarks (UCF101, HMDB51, Kinetics-400, Something-Something V2, and FineGym), with consistent improvements, particularly on high-speed actions such as badminton smashes and uneven-bars giant swings. Ablation studies confirm the effectiveness of MoWA and multi-scale feature fusion, and the framework generalizes well across diverse motion speeds and datasets, validating its robustness and practical applicability.
📝 Abstract
Action recognition is a crucial task in artificial intelligence, with significant implications across various domains. We first perform a comprehensive analysis of seven prominent action recognition methods across five widely used datasets. This analysis reveals a critical, yet previously overlooked, observation: as the velocity of actions increases, the performance of these methods declines to varying degrees, undermining their robustness. This decline poses significant challenges for their application in real-world scenarios. Building on these findings, we introduce the Velocity-Aware Action Recognition (VA-AR) framework to obtain robust action representations across different velocities. Our principal insight is that rapid actions (e.g., the giant circle backward on the uneven bars or a smash in badminton) occur within short time intervals, necessitating smaller temporal attention windows to accurately capture intricate changes. Conversely, slower actions (e.g., drinking water or wiping the face) require larger windows to effectively encompass the broader context. VA-AR employs a Mixture of Window Attention (MoWA) strategy, dynamically adjusting its attention window size based on the action's velocity. This adjustment enables VA-AR to obtain a velocity-aware representation, thereby enhancing the accuracy of action recognition. Extensive experiments confirm that VA-AR achieves state-of-the-art performance on the same five datasets, demonstrating VA-AR's effectiveness across a broad spectrum of action recognition scenarios.
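The core idea (smaller attention windows for fast actions, larger windows for slow ones, mixed by an estimated velocity) can be illustrated with a minimal sketch. This is not the authors' implementation: the velocity estimate (mean frame-to-frame feature change), the window sizes, and the gating rule (`logits = -velocity * window`) are all illustrative assumptions, shown here over per-frame features with plain dot-product attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def windowed_attention(feats, window):
    """Self-attention restricted to a local temporal window around each frame."""
    T, D = feats.shape
    out = np.empty_like(feats)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        ctx = feats[lo:hi]                            # (W, D) local temporal context
        scores = ctx @ feats[t] / np.sqrt(D)          # scaled dot-product scores
        out[t] = softmax(scores) @ ctx                # attention-weighted context
    return out

def mowa(feats, window_sizes=(2, 4, 8), temperature=1.0):
    """Toy mixture-of-window attention: gate branch outputs by estimated speed.

    feats: (T, D) per-frame features. Returns (mixed_output, gate_weights).
    Gating rule is a hypothetical choice: faster motion -> smaller windows.
    """
    # Crude velocity proxy: mean magnitude of frame-to-frame feature change.
    velocity = np.linalg.norm(np.diff(feats, axis=0), axis=1).mean()
    # Larger velocity pushes probability mass toward smaller windows.
    logits = -velocity * np.array(window_sizes, dtype=float) / temperature
    gates = softmax(logits)
    branches = np.stack([windowed_attention(feats, w) for w in window_sizes])
    return np.tensordot(gates, branches, axes=1), gates
```

For example, scaling the input features (which inflates the velocity proxy) shifts the gate weights toward the smallest window, mirroring the paper's intuition that rapid actions need finer temporal granularity.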