🤖 AI Summary
To address performance degradation in action recognition on low-illumination, high-noise videos, where degraded spatiotemporal details hurt accuracy, this paper proposes MD-BERT, an end-to-end trainable three-stream framework for dark video understanding. Methodologically, it fuses three complementary visual streams (raw dark frames, gamma-corrected frames, and CLAHE-enhanced frames) via the proposed Dynamic Feature Fusion (DFF) module, which extends attentional fusion to a three-stream setting for joint learning of illumination adaptation and fine-grained plus global contextual cues. A BERT-style temporal encoder with bidirectional self-attention then captures long-range spatiotemporal dependencies across frames. Evaluated on the ARID V1.0 and V1.5 benchmarks, the framework achieves state-of-the-art accuracy, outperforming existing approaches, and ablation studies systematically validate the effectiveness and complementarity of the multi-stream preprocessing, the DFF module, and the BERT-style encoder.
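To make the three complementary input views concrete, the sketch below derives a gamma-corrected and a contrast-enhanced frame from a raw dark frame. This is a minimal stand-in, not the paper's implementation: global histogram equalization substitutes for CLAHE (its locally adaptive variant), and the gamma value of 0.4 is an assumed example.

```python
import numpy as np

def gamma_correct(frame: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Brighten a dark uint8 frame via power-law (gamma) correction; gamma < 1 brightens."""
    norm = frame.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

def hist_equalize(frame: np.ndarray) -> np.ndarray:
    """Global histogram equalization of a grayscale uint8 frame.
    (A simplified stand-in for CLAHE, which equalizes local tiles with clipping.)"""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0).astype(np.uint8)
    return lut[frame]

def three_streams(frame: np.ndarray):
    """Produce the three complementary views fed to the three-stream network."""
    return frame, gamma_correct(frame), hist_equalize(frame)
```

Each view exposes different detail: the raw stream preserves noise statistics, the gamma stream lifts shadows, and the equalized stream stretches contrast toward the full intensity range.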
📝 Abstract
Action recognition in dark videos, i.e., low-light (under-exposed) or noisy footage, is a challenging task because visibility degradation obscures critical spatiotemporal details. This paper proposes MD-BERT, a novel multi-stream approach that integrates complementary pre-processing techniques, such as gamma correction and histogram equalization, alongside raw dark frames to address these challenges. We introduce the Dynamic Feature Fusion (DFF) module, which extends existing attentional fusion methods to a three-stream setting, thereby capturing fine-grained and global contextual information across different brightness and contrast enhancements. The fused spatiotemporal features are then processed by a BERT-based temporal model, whose bidirectional self-attention effectively captures long-range dependencies and contextual relationships across frames. Extensive experiments on the ARID V1.0 and ARID V1.5 dark-video datasets show that MD-BERT outperforms existing methods, establishing a new state of the art. Ablation studies further highlight the individual contributions of each input stream and the effectiveness of the proposed DFF and BERT modules. The official code for this work is available at: https://github.com/HrishavBakulBarua/DarkBERT
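The attentional fusion across the three streams can be sketched, under simplifying assumptions, as a softmax-weighted convex combination of per-stream features. The scoring vector `w_score` below is a hypothetical stand-in for whatever learned scorer DFF uses; the actual module is richer (it also models fine-grained channel/spatial attention), so treat this only as an illustration of attention-based stream weighting.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_streams(feats: np.ndarray, w_score: np.ndarray) -> np.ndarray:
    """Attention-weighted fusion of three stream features.

    feats:   (3, D) array, one D-dim feature per stream (raw, gamma, equalized).
    w_score: (D,) scoring vector (stands in for a learned scorer).
    Returns a single (D,) fused feature.
    """
    scores = feats @ w_score   # (3,) relevance score per stream
    weights = softmax(scores)  # attention distribution over the three streams
    return weights @ feats     # convex combination of stream features
```

Because the weights are nonnegative and sum to one, the fused feature lies elementwise within the range spanned by the three stream features, so a stream that scores poorly for a given clip is softly down-weighted rather than discarded.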