MD-BERT: Action Recognition in Dark Videos via Dynamic Multi-Stream Fusion and Temporal Modeling

📅 2025-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the performance degradation of action recognition in low-illumination, high-noise videos, where spatiotemporal details are degraded, this paper proposes an end-to-end trainable three-stream framework for dark video understanding. Methodologically, it fuses three complementary visual streams—raw dark frames, gamma-corrected frames, and CLAHE-enhanced frames—via a novel Dynamic Feature Fusion (DFF) module, which extends existing attentional fusion methods to a three-stream setting to capture both fine-grained and global contextual information across the enhancements. A BERT-style temporal encoder with bidirectional self-attention is then employed to model long-range spatiotemporal dependencies across frames. Evaluated on the ARID V1.0 and V1.5 benchmarks, the framework achieves state-of-the-art accuracy, outperforming existing approaches. Ablation studies systematically validate the effectiveness and complementarity of the multi-stream preprocessing, the DFF module, and the BERT-style encoder.

📝 Abstract
Action recognition in dark, low-light (under-exposed) or noisy videos is a challenging task due to visibility degradation, which can hinder critical spatiotemporal details. This paper proposes MD-BERT, a novel multi-stream approach that integrates complementary pre-processing techniques such as gamma correction and histogram equalization alongside raw dark frames to address these challenges. We introduce the Dynamic Feature Fusion (DFF) module, extending existing attentional fusion methods to a three-stream setting, thereby capturing fine-grained and global contextual information across different brightness and contrast enhancements. The fused spatiotemporal features are then processed by a BERT-based temporal model, which leverages its bidirectional self-attention to effectively capture long-range dependencies and contextual relationships across frames. Extensive experiments on the ARID V1.0 and ARID V1.5 dark video datasets show that MD-BERT outperforms existing methods, establishing a new state-of-the-art performance. Ablation studies further highlight the individual contributions of each input stream and the effectiveness of the proposed DFF and BERT modules. The official website of this work is available at: https://github.com/HrishavBakulBarua/DarkBERT
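The two enhancement streams named in the abstract are standard image operations and can be sketched in a few lines of NumPy. Note this is an illustrative sketch, not the paper's implementation: the paper uses CLAHE (a locally adaptive variant), while the `hist_equalize` below performs plain global histogram equalization to stay dependency-free, and all function names are made up here.

```python
import numpy as np

def gamma_correct(frame, gamma=2.2):
    """Brighten a dark frame: normalize to [0, 1], apply a power-law curve."""
    norm = frame.astype(np.float32) / 255.0
    return (np.power(norm, 1.0 / gamma) * 255.0).astype(np.uint8)

def hist_equalize(frame):
    """Global histogram equalization for a single-channel uint8 frame.
    (The paper uses CLAHE, which applies this locally with clipping.)"""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero bin of the cumulative histogram
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0).astype(np.uint8)
    return lut[frame]

# A synthetic "dark" frame: low-intensity random noise in [0, 40).
dark = (np.random.default_rng(0).random((64, 64)) * 40).astype(np.uint8)

# The three complementary input streams fed to the network.
streams = [dark, gamma_correct(dark), hist_equalize(dark)]
```

Each stream preserves the frame geometry, so the three can be processed by parallel backbones and fused downstream.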
Problem

Research questions and friction points this paper is trying to address.

Enhances action recognition in dark videos
Integrates multi-stream dynamic feature fusion
Improves temporal modeling with BERT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-stream dynamic feature fusion
BERT-based temporal modeling
Gamma correction and histogram equalization
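The abstract describes DFF as an attentional fusion of three streams; the module's exact architecture is not given on this page. As a hedged sketch of the general idea only, the snippet below fuses three per-stream feature vectors as an attention-weighted convex combination, with a random vector standing in for a learned scoring layer (all names here are illustrative, not the paper's).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_streams(feats, score_w):
    """Fuse N feature streams with scalar attention weights.

    feats:   (N, D) array, one feature vector per stream.
    score_w: (D,) scoring vector (stand-in for a learned scoring layer).
    Returns the attention-weighted sum over streams, shape (D,).
    """
    scores = feats @ score_w   # one relevance score per stream, shape (N,)
    weights = softmax(scores)  # non-negative, sums to 1 across streams
    return weights @ feats     # convex combination of the N streams

rng = np.random.default_rng(1)
raw, gamma, clahe = rng.standard_normal((3, 8))  # toy per-stream features
feats = np.stack([raw, gamma, clahe])
fused = fuse_streams(feats, rng.standard_normal(8))
```

Because the weights form a convex combination, each component of `fused` stays within the range spanned by the three streams; in the full model the fused spatiotemporal features are then passed to the BERT-based temporal encoder.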
Sharana Dharshikgan Suresh Dass
School of Information Technology, Monash University, Malaysia
H. Barua
Faculty of Information Technology, Monash University, Australia; Robotics and Autonomous Systems Lab, TCS Research, India
Ganesh Krishnasamy
Monash University Malaysia
Machine learning · computer vision · deep learning
Raveendran Paramesran
Honorary Professor, Dept of Electrical Engineering, University Malaya, Kuala Lumpur, Malaysia
AI in sports and agriculture · image and video analysis · signal processing
Raphael C.-W. Phan
School of Information Technology, Monash University, Malaysia