Effects of Different Attention Mechanisms Applied on 3D Models in Video Classification

📅 2026-01-15
🏛️ Communication Systems and Applications
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the impact of various attention mechanisms on the performance of 3D video classification models under a setting where spatial resolution is enhanced at the expense of temporal cues. Building upon three canonical 3D CNN architectures—MC3, R3D, and R(2+1)D—the authors introduce Dropout layers to simulate scenarios with limited temporal information and systematically integrate ten attention modules, including CBAM, TCN, multi-head attention, and channel attention, for comparative evaluation. Experiments on the UCF101 dataset demonstrate that an enhanced R(2+1)D model combined with multi-head attention achieves 88.98% accuracy, while revealing substantial performance variations across different attention mechanisms at the class level. This work provides the first systematic analysis of how degraded temporal features critically affect 3D action recognition, offering novel insights for modeling high-resolution videos with weak temporal signals.

📝 Abstract
Human action recognition has become an important research focus in computer vision due to its wide range of applications. 3D ResNet-based CNN models, particularly MC3, R3D, and R(2+1)D, use different convolutional filters to extract spatiotemporal features. This paper investigates the impact of reducing the knowledge captured from temporal data while increasing the resolution of the frames. To set up this experiment, we first created designs similar to the three originals, but with a dropout layer added before the final classifier. We then developed ten new variants for each of these three designs. The variants include special attention blocks within their architecture, such as the convolutional block attention module (CBAM) and temporal convolutional networks (TCN), in addition to multi-headed and channel attention mechanisms. The purpose is to observe the extent of the influence each of these blocks has on the performance of the temporally restricted models. Testing all the models on UCF101 yielded an accuracy of 88.98% for the variant with multi-headed attention added to the modified R(2+1)D. This paper demonstrates the significance of missing temporal features for the performance of the newly created increased-resolution models. The variants showed different behavior in class-level accuracy, despite contributing similarly to overall performance.
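The architecture change the abstract describes, a dropout layer inserted before the final classifier, followed by a multi-head attention block over the temporal features, can be sketched in PyTorch. This is a minimal illustration, not the authors' exact configuration: the feature dimension, number of heads, and dropout rate are assumptions, and `AttentionHead` is a hypothetical name for the classifier head that would sit on top of an R(2+1)D backbone.

```python
import torch
import torch.nn as nn

class AttentionHead(nn.Module):
    """Sketch of the paper's variant design: dropout (simulating weakened
    temporal cues) followed by multi-head self-attention over the temporal
    axis, then a linear classifier. Sizes are illustrative assumptions."""

    def __init__(self, feat_dim=512, num_heads=8, num_classes=101, p_drop=0.5):
        super().__init__()
        self.dropout = nn.Dropout(p_drop)  # limits temporal information
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.fc = nn.Linear(feat_dim, num_classes)  # UCF101 has 101 classes

    def forward(self, x):
        # x: (batch, time, feat_dim) clip-level features from a 3D backbone
        x = self.dropout(x)
        x, _ = self.attn(x, x, x)       # self-attention across time steps
        return self.fc(x.mean(dim=1))   # temporal average pooling -> logits
```

In practice the features `x` would come from a backbone such as torchvision's `r2plus1d_18` with its classifier removed; the same head pattern could host CBAM or channel-attention blocks in place of `nn.MultiheadAttention` for the other variants.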
Problem

Research questions and friction points this paper is trying to address.

video classification
attention mechanisms
temporal features
3D CNN
human action recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention mechanisms
3D CNN
temporal features
video classification
multi-headed attention
Mohammad Rasras
National University of Science and Technology POLITEHNICA Bucharest, Doctoral School of Automatic Control and Computers, Splaiul Independenței 313, 060042, Bucharest, Romania
Iuliana Marin
Assoc. Prof. Habil. PhD. Eng., National University of Science and Technology Politehnica Bucharest
Ambient Intelligence, eHealth
Serban Radu
National University of Science and Technology POLITEHNICA Bucharest, Faculty of Automatic Control and Computers, Splaiul Independenței 313, 060042, Bucharest, Romania
Irina Mocanu
University POLITEHNICA of Bucharest
Ambient intelligence, Computer Vision, Machine learning, Computer graphics, Formal languages