🤖 AI Summary
This study investigates the impact of various attention mechanisms on the performance of 3D video classification models under a setting where spatial resolution is enhanced at the expense of temporal cues. Building upon three canonical 3D CNN architectures—MC3, R3D, and R(2+1)D—the authors introduce Dropout layers to simulate scenarios with limited temporal information and systematically integrate ten attention modules, including CBAM, TCN, multi-head attention, and channel attention, for comparative evaluation. Experiments on the UCF101 dataset demonstrate that an enhanced R(2+1)D model combined with multi-head attention achieves 88.98% accuracy, while revealing substantial performance variations across different attention mechanisms at the class level. This work provides the first systematic analysis of how degraded temporal features critically affect 3D action recognition, offering novel insights for modeling high-resolution videos with weak temporal signals.
📝 Abstract
Human action recognition has become an important research focus in computer vision due to its wide range of applications. 3D ResNet-based CNN models, particularly MC3, R3D, and R(2+1)D, use different convolutional filter designs to extract spatiotemporal features. This paper investigates the impact of reducing the knowledge captured from temporal data while increasing the resolution of the frames. To set up this experiment, we first created designs similar to the three originals, but with a dropout layer added before the final classifier. We then developed ten new versions of each of these three designs. The variants embed dedicated attention blocks in their architectures, such as the convolutional block attention module (CBAM) and temporal convolutional networks (TCN), in addition to multi-head and channel attention mechanisms. The purpose is to observe how much each of these blocks influences the performance of the temporally restricted models. Testing all the models on UCF101 yielded an accuracy of 88.98% for the variant that adds multi-head attention to the modified R(2+1)D. This paper demonstrates the significance of the missing temporal features for the performance of the newly created increased-resolution models. The variants behaved differently in class-level accuracy, despite contributing similarly to overall performance.
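To make the multi-head attention idea concrete, here is a minimal NumPy sketch of scaled dot-product multi-head self-attention applied over the temporal axis of pooled per-frame features. All names, shapes, and the random stand-in weights are illustrative assumptions, not the paper's actual implementation (which builds on trained R(2+1)D feature maps):

```python
import numpy as np

def multi_head_attention(x, num_heads, rng):
    """Scaled dot-product multi-head self-attention over axis 0 (time).

    x: (T, D) array of per-frame features; D must be divisible by num_heads.
    The projection weights are random here purely for illustration;
    in a real model they would be learned parameters.
    """
    T, D = x.shape
    d = D // num_heads
    # Random projections stand in for learned Wq/Wk/Wv matrices.
    Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split the feature dimension into heads: (num_heads, T, d)
    split = lambda a: a.reshape(T, num_heads, d).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)    # (H, T, T) attention logits
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)         # softmax over key positions
    out = weights @ v                                  # (H, T, d) per-head outputs
    return out.transpose(1, 0, 2).reshape(T, D)        # concatenate heads: (T, D)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 64))   # e.g. 8 time steps, 64-dim features
att = multi_head_attention(feat, num_heads=4, rng=rng)
print(att.shape)  # (8, 64)
```

In the paper's setting, a block like this would sit between the (dropout-restricted) 3D backbone and the final classifier, letting each time step re-weight information from the others before classification.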