A Multi-modal Fusion Network for Terrain Perception Based on Illumination Aware

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor robustness and insufficient real-time performance of autonomous driving systems in perceiving road terrain under varying illumination and weather conditions, this paper proposes an illumination-aware multi-modal fusion network. Methodologically: (1) we introduce a novel illumination-aware sub-network coupled with an illumination-constrained loss function, enabling adaptive, dynamic weighting of exteroceptive (camera/LiDAR) and proprioceptive features based on ambient lighting; (2) we design a pretraining-guided, end-to-end illumination-aware fusion paradigm integrating multi-modal alignment, attention mechanisms, and joint multi-task optimization. Experiments demonstrate that our approach significantly outperforms state-of-the-art methods across diverse illumination scenarios, achieving a 12.7% improvement in terrain classification accuracy over unimodal baselines while exhibiting strong generalization capability. To facilitate reproducibility and further research, we publicly release a dedicated benchmark dataset.

📝 Abstract
Road terrains play a crucial role in ensuring the driving safety of autonomous vehicles (AVs). However, existing AV sensors, including cameras and LiDARs, are susceptible to variations in lighting and weather conditions, making real-time perception of road conditions challenging. In this paper, we propose an illumination-aware multi-modal fusion network (IMF), which leverages both exteroceptive and proprioceptive perception and optimizes the fusion process based on illumination features. We introduce an illumination-perception sub-network to accurately estimate illumination features. Moreover, we design a multi-modal fusion network that dynamically adjusts the weights of different modalities according to illumination features. We enhance optimization by pre-training the illumination-perception sub-network and incorporating an illumination loss as one of the training constraints. Extensive experiments demonstrate that IMF outperforms state-of-the-art methods. Comparisons with single-modality perception methods highlight the comprehensive advantages of multi-modal fusion in accurately perceiving road terrains under varying lighting conditions. Our dataset is available at: https://github.com/lindawang2016/IMF.
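The core idea in the abstract, fusing exteroceptive (camera/LiDAR) and proprioceptive features with weights that depend on estimated illumination, can be sketched minimally as follows. This is not the paper's implementation: the gating logits (camera trusted more in bright scenes, LiDAR and proprioception more in the dark) and the scalar illumination input are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def illumination_gated_fusion(cam_feat, lidar_feat, proprio_feat, illum):
    """Fuse three modality feature vectors with illumination-driven weights.

    illum in [0, 1]: 0 = dark, 1 = bright. The logit scheme below is a
    hypothetical stand-in for the paper's learned illumination-perception
    sub-network: the camera weight grows with illumination, while the
    LiDAR and proprioceptive weights shrink.
    """
    logits = np.array([2.0 * illum, 1.0 - illum, 1.0 - illum])
    w = softmax(logits)
    fused = w[0] * cam_feat + w[1] * lidar_feat + w[2] * proprio_feat
    return fused, w
```

In the actual network these weights would be produced by a learned sub-network and applied to feature maps, but the sketch shows the gating principle: the convex weights always sum to one, and the camera's share rises monotonically with illumination.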
Problem

Research questions and friction points this paper is trying to address.

Real-time road terrain perception under varying lighting conditions
Dynamic multi-modal fusion based on illumination features
Improved accuracy in autonomous vehicle terrain perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Illumination-aware multi-modal fusion network
Dynamic weight adjustment based on illumination
Pre-training with illumination loss constraint
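The last point, training with an illumination loss as a constraint, amounts to a joint objective: the terrain-classification loss plus a penalty on the illumination estimate. A minimal sketch, assuming a mean-squared-error penalty and a weighting coefficient `lam` (both illustrative, not taken from the paper):

```python
import numpy as np

def illumination_constrained_loss(task_loss, illum_pred, illum_true, lam=0.1):
    """Joint objective: task loss plus lam-weighted illumination MSE.

    task_loss: scalar terrain-classification loss.
    illum_pred / illum_true: predicted and reference illumination values.
    lam: hypothetical trade-off coefficient.
    Returns (total_loss, illumination_loss).
    """
    illum_pred = np.asarray(illum_pred, dtype=float)
    illum_true = np.asarray(illum_true, dtype=float)
    illum_loss = float(np.mean((illum_pred - illum_true) ** 2))
    return task_loss + lam * illum_loss, illum_loss
```

Pre-training would first minimize the illumination term alone so the sub-network produces reliable illumination features before the fused objective is optimized end to end.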
Authors

Rui Wang
Shichun Yang
Yuyi Chen
Beihang University
Autonomous vehicle · Intelligent driving · Machine learning · Intelligent tire
Zhuoyang Li
Zexiang Tong
Jianyi Xu
Jiayi Lu
Beihang University
Autonomous Vehicle · Computer Vision · SOTIF · ADAS
Xinjie Feng
Yaoguang Cao