🤖 AI Summary
Existing video action recognition methods often overlook the multi-granularity nature of human actions, which hinders the integration of cross-scale spatio-temporal cues. To address this, the authors propose a Context-Aware Network (CAN) built on two modules: a Multi-scale Temporal Cue Module (MTCM), which captures both fast-changing motion details and the overall action flow, and a Group Spatial Cue Module (GSCM), which groups feature maps and applies a scale-specific extraction method to each group. Evaluated on Something-Something V1, Something-Something V2, Diving48, Kinetics-400, and UCF101, CAN reaches accuracies of 50.4%, 63.9%, 88.4%, 74.9%, and 86.9%, respectively, outperforming most mainstream methods. These results support the central claim that explicitly modeling multi-scale temporal rhythms and spatial semantics, from local parts to global configurations, yields robust action recognition across diverse benchmarks.
📝 Abstract
Action recognition is a critical task in video understanding, requiring the comprehensive capture of spatio-temporal cues across various scales. However, existing methods often overlook the multi-granularity nature of actions. To address this limitation, we introduce the Context-Aware Network (CAN). CAN consists of two core modules: the Multi-scale Temporal Cue Module (MTCM) and the Group Spatial Cue Module (GSCM). MTCM effectively extracts temporal cues at multiple scales, capturing both fast-changing motion details and overall action flow. GSCM, on the other hand, extracts spatial cues at different scales by grouping feature maps and applying specialized extraction methods to each group. Experiments conducted on five benchmark datasets (Something-Something V1 and V2, Diving48, Kinetics-400, and UCF101) demonstrate the effectiveness of CAN. Our approach achieves competitive performance, outperforming most mainstream methods, with accuracies of 50.4% on Something-Something V1, 63.9% on Something-Something V2, 88.4% on Diving48, 74.9% on Kinetics-400, and 86.9% on UCF101. These results highlight the importance of capturing multi-scale spatio-temporal cues for robust action recognition.
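The GSCM idea described above, splitting channels into groups and giving each group its own receptive-field scale, can be illustrated with a minimal sketch. Note this is an assumption-laden toy in pure Python: the function names (`group_spatial_cues`, `moving_average`) and the per-group window sizes are hypothetical, and real feature maps would be 2-D tensors processed with learned convolutions rather than 1-D signals with fixed averaging.

```python
def moving_average(seq, window):
    """Smooth a 1-D signal with the given window size (edge-truncated).
    Stands in for a convolution whose kernel size sets the spatial scale."""
    out = []
    for i in range(len(seq)):
        lo = max(0, i - window // 2)
        hi = min(len(seq), i + window // 2 + 1)
        out.append(sum(seq[lo:hi]) / (hi - lo))
    return out

def group_spatial_cues(feature_maps, windows=(1, 3, 5, 7)):
    """Hypothetical GSCM-style grouping: split the channel list into
    len(windows) groups and process each group at its own scale."""
    n_groups = len(windows)
    group_size = len(feature_maps) // n_groups
    outputs = []
    for g, w in enumerate(windows):
        group = feature_maps[g * group_size:(g + 1) * group_size]
        # Each group sees a different receptive field, so the concatenated
        # output mixes fine-grained and coarse spatial cues.
        outputs.extend(moving_average(ch, w) for ch in group)
    return outputs
```

The design choice this mimics is cheap multi-scale coverage: rather than running every channel through every kernel size, each channel group pays for only one scale, and concatenating the groups recovers a multi-scale representation.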