Context-Aware Network Based on Multi-scale Spatio-temporal Attention for Action Recognition in Videos

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video action recognition methods neglect the multi-granularity nature of human actions, hindering effective integration of cross-scale spatio-temporal cues. To address this, we propose a Context-Aware Network (CAN) that jointly models temporal dynamics and spatial semantics via two novel modules: a Multi-scale Temporal Cue Module (MTCM) and a Group Spatial Cue Module (GSCM). CAN integrates multi-scale spatio-temporal attention, grouped feature-map processing, and hierarchical temporal pooling in an end-to-end trainable architecture. Evaluated on Something-Something V1/V2, Diving48, Kinetics-400, and UCF101, CAN achieves competitive accuracies of 50.4%, 63.9%, 88.4%, 74.9%, and 86.9%, respectively, outperforming most mainstream methods. The approach advances multi-granularity action modeling by explicitly capturing both fine-grained temporal rhythms and hierarchical spatial semantics, from local parts to global configurations.

📝 Abstract
Action recognition is a critical task in video understanding, requiring the comprehensive capture of spatio-temporal cues across various scales. However, existing methods often overlook the multi-granularity nature of actions. To address this limitation, we introduce the Context-Aware Network (CAN). CAN consists of two core modules: the Multi-scale Temporal Cue Module (MTCM) and the Group Spatial Cue Module (GSCM). MTCM effectively extracts temporal cues at multiple scales, capturing both fast-changing motion details and overall action flow. GSCM, on the other hand, extracts spatial cues at different scales by grouping feature maps and applying specialized extraction methods to each group. Experiments conducted on five benchmark datasets (Something-Something V1 and V2, Diving48, Kinetics-400, and UCF101) demonstrate the effectiveness of CAN. Our approach achieves competitive performance, outperforming most mainstream methods, with accuracies of 50.4% on Something-Something V1, 63.9% on Something-Something V2, 88.4% on Diving48, 74.9% on Kinetics-400, and 86.9% on UCF101. These results highlight the importance of capturing multi-scale spatio-temporal cues for robust action recognition.
Problem

Research questions and friction points this paper is trying to address.

Addresses multi-scale spatio-temporal cue extraction in video action recognition
Overcomes limitations in capturing multi-granularity nature of actions
Proposes a context-aware network for robust action recognition performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale Temporal Cue Module extracts temporal cues at multiple scales
Group Spatial Cue Module groups feature maps for spatial cue extraction
Context-Aware Network integrates multi-scale spatio-temporal attention for action recognition
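The paper describes MTCM and GSCM only at a high level. As a rough, hypothetical illustration of the general idea, splitting channels into groups and smoothing each group at a different temporal scale, the following NumPy sketch is an assumption-laden toy, not the authors' implementation (function and parameter names are invented here):

```python
import numpy as np

def grouped_multiscale_cues(x, scales=(1, 2, 4)):
    """Toy sketch of grouped multi-scale temporal cue extraction.

    x: feature map of shape (T, C) -- T frames, C channels.
    Channels are split into len(scales) groups; group i is smoothed with
    a temporal average-pooling window of size scales[i], so different
    groups capture motion at different temporal granularities.
    """
    T, _ = x.shape
    groups = np.array_split(x, len(scales), axis=1)  # channel groups
    outs = []
    for g, k in zip(groups, scales):
        if k == 1:
            outs.append(g)  # finest scale: keep per-frame detail
            continue
        # pad the temporal ends so the smoothed sequence keeps length T
        padded = np.pad(g, ((k // 2, k - 1 - k // 2), (0, 0)), mode="edge")
        kernel = np.ones(k) / k  # moving average over k frames
        smoothed = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="valid"), 0, padded
        )
        outs.append(smoothed)
    # re-concatenate the groups along the channel axis: shape (T, C)
    return np.concatenate(outs, axis=1)
```

In the actual network, each group would presumably pass through learned convolutions and attention rather than fixed average pooling; the sketch only shows the grouping-and-rescaling structure the bullets describe.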
Xiaoyang Li
Southern University of Science and Technology
Integrated sensing, communication, and computation; edge intelligence; network optimization
Wenzhu Yang
School of Cyber Security and Computer, Hebei University, Baoding 071000, Hebei, China; Machine Vision Engineering Research Center, Hebei University, Baoding 071000, Hebei, China
Kanglin Wang
School of Cyber Security and Computer, Hebei University, Baoding 071000, Hebei, China
Tiebiao Wang
School of Cyber Security and Computer, Hebei University, Baoding 071000, Hebei, China
Qingsong Fei
School of Cyber Security and Computer, Hebei University, Baoding 071000, Hebei, China