VA-AR: Learning Velocity-Aware Action Representations with Mixture of Window Attention

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Action recognition performance degrades significantly for high-speed motions, limiting model robustness. This paper identifies a negative correlation between motion velocity and recognition accuracy, and proposes a velocity-aware modeling framework. Specifically, the authors design a Mixture of Window Attention (MoWA) mechanism that dynamically adjusts the temporal attention window size according to motion velocity, enabling adaptive temporal modeling. Furthermore, they integrate multi-scale temporal features within a Transformer architecture to learn velocity-robust action representations. The method achieves state-of-the-art results on five mainstream benchmarks—UCF101, HMDB51, Kinetics-400, Something-Something V2, and FineGym—demonstrating consistent improvements, particularly on high-speed actions such as badminton smashes and uneven-bars giant swings. Ablation studies confirm the effectiveness of MoWA and multi-scale feature fusion. The framework generalizes across diverse motion velocities and datasets, validating its robustness and practical applicability.

📝 Abstract
Action recognition is a crucial task in artificial intelligence, with significant implications across various domains. We first perform a comprehensive analysis of seven prominent action recognition methods across five widely used datasets. This analysis reveals a critical, yet previously overlooked, observation: as the velocity of actions increases, the performance of these methods declines to varying degrees, undermining their robustness. This decline poses significant challenges for their application in real-world scenarios. Building on these findings, we introduce the Velocity-Aware Action Recognition (VA-AR) framework to obtain robust action representations across different velocities. Our principal insight is that rapid actions (e.g., the giant circle backward on uneven bars or a smash in badminton) occur within short time intervals, necessitating smaller temporal attention windows to accurately capture intricate changes. Conversely, slower actions (e.g., drinking water or wiping one's face) require larger windows to effectively encompass the broader context. VA-AR employs a Mixture of Window Attention (MoWA) strategy, dynamically adjusting its attention window size based on the action's velocity. This adjustment enables VA-AR to obtain a velocity-aware representation, thereby enhancing the accuracy of action recognition. Extensive experiments confirm that VA-AR achieves state-of-the-art performance on the same five datasets, demonstrating its effectiveness across a broad spectrum of action recognition scenarios.
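The core idea above — fast actions get small temporal windows, slow actions get large ones, blended by a velocity-dependent gate — can be illustrated with a minimal sketch. This is not the paper's implementation: the velocity estimate (mean frame differencing), the window sizes, the softmax gating, and the use of windowed average pooling in place of full attention are all simplifying assumptions for illustration.

```python
import numpy as np

def estimate_velocity(frames):
    """Crude motion-speed proxy: mean absolute frame-to-frame difference.
    frames: array of shape (T, H, W) or (T, H, W, C)."""
    if frames.shape[0] < 2:
        return 0.0
    return float(np.abs(np.diff(frames, axis=0)).mean())

def mixture_of_windows(features, velocity, windows=(2, 4, 8), tau=1.0):
    """Hypothetical MoWA-style blend: pool features over several temporal
    window sizes and mix them with a softmax gate that favors SMALL windows
    when velocity is high. features: (T, D) per-frame feature matrix."""
    T, _ = features.shape
    # Gate scores: larger windows are penalized more as velocity grows.
    scores = np.array([-velocity * w / tau for w in windows], dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    out = np.zeros_like(features, dtype=float)
    for w, g in zip(windows, weights):
        pooled = np.empty_like(features, dtype=float)
        for t in range(T):
            lo, hi = max(0, t - w // 2), min(T, t + w // 2 + 1)
            pooled[t] = features[lo:hi].mean(axis=0)  # local temporal context
        out += g * pooled
    return out
```

For a fast clip (high `velocity`), the gate concentrates weight on the 2-frame window, so each output feature reflects only its immediate neighborhood; for a slow clip, the 8-frame window dominates and broader context is averaged in, mirroring the adaptive-window intuition described in the abstract.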
Problem

Research questions and friction points this paper is trying to address.

Decline in action recognition performance with increasing action velocity.
Need for velocity-aware action representations in diverse scenarios.
How to dynamically adjust the attention window size based on action velocity.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic window size adjustment for velocity-aware recognition.
Mixture of Window Attention strategy for action representation.
Enhanced accuracy across varying action velocities.
Jiangning Wei
School of Artificial Intelligence, Beijing University of Posts and Telecommunications
Lixiong Qin
Beijing University of Posts and Telecommunications
Face and Human Perception, MLLM
Bo Yu
Beijing University of Posts and Telecommunications
Tianjian Zou
Beijing University of Posts and Telecommunications
Chuhan Yan
Macau University of Science and Technology
D. Xiao
China Institute of Sport Science
Yang Yu
Beijing Sport University
Lan Yang
Edwin & Florence Skinner Professor, Electrical & Systems Engineering, Washington Univ. in St Louis
resonator, laser, nonlinear optics, sensing, non-Hermitian physics
Ke Li
Beijing University of Posts and Telecommunications
Jun Liu
Beijing University of Posts and Telecommunications