Minimizing Occlusion Effect on Multi-View Camera Perception in BEV with Multi-Sensor Fusion

📅 2025-01-10
🤖 AI Summary
Autonomous driving perception—particularly BEV vehicle segmentation and detection—suffers severe performance degradation under occlusions caused by rain, fog, and dust. This work first systematically quantifies the spatial distribution patterns of such occlusions in BEV space using the nuScenes dataset. We propose a cross-modal occlusion compensation mechanism grounded in BEV feature projection modeling and multi-sensor spatiotemporal alignment, enabling tightly coupled fusion of LiDAR, radar, and camera modalities. Additionally, we design an occlusion-aware adaptive weighting network to dynamically suppress unreliable visual features. Experiments demonstrate a 12.7% improvement in vehicle segmentation mIoU and a 91.3% BEV detection recall rate under heavy rain/fog conditions, significantly enhancing perception robustness in adverse weather.
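The occlusion-aware adaptive weighting described in the summary can be understood as a per-cell convex combination of camera and LiDAR BEV features, where a reliability map down-weights occluded camera cells. A minimal sketch of this idea follows; the function name, grid shapes, and toy confidence map are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

def occlusion_aware_fusion(cam_bev, lidar_bev, cam_conf):
    """Fuse camera and LiDAR BEV feature grids (illustrative sketch).

    cam_bev, lidar_bev: (H, W, C) feature grids in the same BEV frame.
    cam_conf: (H, W) per-cell camera reliability in [0, 1]
              (low where the lens is occluded by rain, fog, or dirt).
    Returns an (H, W, C) grid where unreliable camera cells
    lean on the LiDAR features instead.
    """
    w = cam_conf[..., None]                  # broadcast weight over channels
    return w * cam_bev + (1.0 - w) * lidar_bev

# Toy example: 2x2 BEV grid with 3 feature channels.
rng = np.random.default_rng(0)
cam = rng.normal(size=(2, 2, 3))
lidar = rng.normal(size=(2, 2, 3))
conf = np.array([[1.0, 0.0],
                 [0.5, 1.0]])                # cell (0, 1) fully occluded

fused = occlusion_aware_fusion(cam, lidar, conf)
assert np.allclose(fused[0, 0], cam[0, 0])   # trusted camera cell kept
assert np.allclose(fused[0, 1], lidar[0, 1]) # occluded cell falls back to LiDAR
```

In the paper the weights are predicted by a learned network rather than supplied by hand; the fixed convex combination above only illustrates how such weights would suppress unreliable visual features at fusion time.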

📝 Abstract
Autonomous driving technology is rapidly evolving, offering the potential for safer and more efficient transportation. However, the performance of these systems can be significantly compromised by occlusion of sensors due to environmental factors such as dirt, dust, rain, and fog. These occlusions severely affect vision-based tasks such as object detection, vehicle segmentation, and lane recognition. In this paper, we investigate the impact of various kinds of occlusion on camera sensors by projecting their effects from multi-view camera images of the nuScenes dataset into the Bird's-Eye View (BEV) domain. This approach allows us to analyze how occlusions are spatially distributed and how they influence vehicle segmentation accuracy within the BEV domain. Despite significant advances in sensor technology and multi-sensor fusion, a gap remains in the existing literature regarding the specific effects of camera occlusions on BEV-based perception systems. To address this gap, we use a multi-sensor fusion technique that integrates LiDAR and radar sensor data to mitigate the performance degradation caused by occluded cameras. Our findings demonstrate that this approach significantly enhances the accuracy and robustness of vehicle segmentation tasks, leading to more reliable autonomous driving systems.
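The vehicle segmentation accuracy that the abstract evaluates in the BEV domain is conventionally reported as mean intersection-over-union (mIoU) across BEV grid cells. A minimal sketch of that metric, under the assumption of a two-class (background/vehicle) BEV grid; the function and toy grids are illustrative, not taken from the paper:

```python
import numpy as np

def bev_miou(pred, gt, num_classes=2):
    """Mean IoU over classes for two (H, W) BEV label grids."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both grids
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 BEV grids: class 1 = vehicle, class 0 = background.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1          # over-segments the vehicle by one column

print(round(bev_miou(pred, gt), 3))  # → 0.75
```

Occlusion studies like this one compare such mIoU scores between clean and occluded inputs, and again after fusion, to quantify how much of the degradation the LiDAR/radar data recovers.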
Problem

Research questions and friction points this paper is trying to address.

Autonomous Driving
Object Recognition
Environmental Obstructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

LiDAR-Radar Fusion
Obstacle Recognition
Autonomous Driving
Sanjay Kumar
Department of Electronic and Computer Engineering, University of Limerick, Ireland; Data-Driven Computer Engineering (D2iCE) Research Centre, University of Limerick, Ireland; Lero, The Irish Software Research Centre, University of Limerick, Ireland
Hiep Truong
Department of Electronic and Computer Engineering, University of Limerick, Ireland; Data-Driven Computer Engineering (D2iCE) Research Centre, University of Limerick, Ireland; DSW, Valeo Kronach, Germany
Sushil Sharma
SFI CRT
Autonomous Driving, Artificial Intelligence, Deep Learning, Computer Vision
Ganesh Sistu
Principal Artificial Intelligence Architect, Valeo Ireland
Autonomous Driving, Machine Learning, Computer Vision, Deep Learning
Tony Scanlan
Department of Electronic and Computer Engineering, University of Limerick, Ireland; Data-Driven Computer Engineering (D2iCE) Research Centre, University of Limerick, Ireland
Eoin Grua
Department of Electronic and Computer Engineering, University of Limerick, Ireland; Data-Driven Computer Engineering (D2iCE) Research Centre, University of Limerick, Ireland
Ciarán Eising
University of Limerick
Computer Vision