BiCo-Fusion: Bidirectional Complementary LiDAR-Camera Fusion for Semantic- and Spatial-Aware 3D Object Detection

📅 2024-06-27
🏛️ IEEE Robotics and Automation Letters
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the inherent trade-off between semantic deficiency and spatial ambiguity in LiDAR–camera fusion for 3D object detection, this paper proposes a bidirectional complementary fusion framework. The framework introduces a Voxel Enhancement Module (VEM) to boost the semantic discriminability of LiDAR features and an Image Enhancement Module (IEM) to strengthen the 3D geometric awareness of camera features. Further, an Adaptive Unified Fusion (U-Fusion) mechanism enables cross-modal, attention-guided dynamic weighting. Leveraging voxelized LiDAR encoding and 2D CNN-based image feature extraction, the method builds a unified representation jointly enhanced in both semantics and geometry. Evaluated on the nuScenes benchmark, the approach outperforms state-of-the-art methods, with notable gains in mAP and BEV localization accuracy, particularly for small objects and occluded scenes.
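The adaptive weighting idea behind U-Fusion can be illustrated with a minimal numpy sketch. This is an assumption about the general mechanism (a learned gate predicting a per-location convex combination of the two modalities), not the paper's exact formulation; the gate here is a single linear layer, and `w_gate`/`b_gate` stand in for trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(lidar_feat, cam_feat, w_gate, b_gate=0.0):
    """Adaptively fuse per-location LiDAR and camera features.

    A gate predicts a weight in (0, 1) from the concatenated features;
    the fused feature is a convex combination of the two modalities,
    so locations where one modality is more reliable can dominate.
    """
    x = np.concatenate([lidar_feat, cam_feat], axis=-1)  # (N, 2C)
    w = sigmoid(x @ w_gate + b_gate)                     # (N, 1)
    return w * lidar_feat + (1.0 - w) * cam_feat         # (N, C)

# Example: fuse 4 locations with 8-channel features.
rng = np.random.default_rng(0)
N, C = 4, 8
lidar = rng.standard_normal((N, C))
cam = rng.standard_normal((N, C))
w_gate = rng.standard_normal((2 * C, 1))
fused = adaptive_fuse(lidar, cam, w_gate)
```

Because the gate output lies in (0, 1), every fused value stays between the corresponding LiDAR and camera values; in the actual method the weighting is attention-guided and learned end to end.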

📝 Abstract
3D object detection is an important task that has been widely applied in autonomous driving. To perform this task, a new trend is to fuse multi-modal inputs, i.e., LiDAR and camera. Under such a trend, recent methods fuse these two modalities by unifying them in the same 3D space. However, during direct fusion in a unified space, the drawbacks of both modalities (LiDAR features struggle with detailed semantic information and the camera lacks accurate 3D spatial information) are also preserved, diluting the semantic and spatial awareness of the final unified representation. To address this issue, this letter proposes a novel bidirectional complementary LiDAR-camera fusion framework, called BiCo-Fusion, that can achieve robust semantic- and spatial-aware 3D object detection. The key insight is to fuse LiDAR and camera features in a bidirectional complementary way to enhance the semantic awareness of the LiDAR and the 3D spatial awareness of the camera. The enhanced features from both modalities are then adaptively fused to build a semantic- and spatial-aware unified representation. Specifically, we introduce Pre-Fusion, consisting of a Voxel Enhancement Module (VEM) to enhance the semantic awareness of voxel features from 2D camera features and an Image Enhancement Module (IEM) to enhance the 3D spatial awareness of camera features from 3D voxel features. We then introduce Unified Fusion (U-Fusion) to adaptively fuse the enhanced features from the last stage to build a unified representation. Extensive experiments demonstrate the superiority of our BiCo-Fusion against the prior arts.
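The VEM's "enhance voxel features from 2D camera features" step can be sketched as projecting each voxel center into the image and sampling the 2D feature map there. The sketch below is a simplified assumption of that mechanism (nearest-pixel sampling, additive fusion, a single camera with intrinsics `K`); the paper's actual module is learned and more elaborate.

```python
import numpy as np

def enhance_voxels(voxel_feats, voxel_centers, img_feats, K):
    """Add projected image features to voxel features (VEM-style sketch).

    voxel_feats:   (N, C) LiDAR voxel features
    voxel_centers: (N, 3) voxel centers in the camera frame
    img_feats:     (H, W, C) 2D CNN feature map
    K:             (3, 3) camera intrinsics
    """
    out = voxel_feats.copy()
    H, W, _ = img_feats.shape
    for i, p in enumerate(voxel_centers):
        if p[2] <= 0:               # behind the camera: leave untouched
            continue
        uvw = K @ p                 # perspective projection
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < H and 0 <= ui < W:
            out[i] = out[i] + img_feats[vi, ui]  # nearest-pixel sample
    return out
```

A voxel whose center projects outside the image, or which lies behind the camera, simply keeps its original LiDAR feature; the symmetric IEM direction would instead lift 3D voxel features back onto camera features.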
Problem

Research questions and friction points this paper is trying to address.

Enhance LiDAR's semantic awareness using camera features
Improve camera's 3D spatial awareness with LiDAR data
Fuse LiDAR-camera features adaptively for robust 3D detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional fusion enhances LiDAR and camera features
Voxel Enhancement Module boosts semantic awareness
Image Enhancement Module improves 3D spatial awareness
Authors
Yang Song (AI Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangdong 511458, China)
Lin Wang (AI/CMA Thrust, HKUST(GZ) and Dept. of CSE, HKUST, Hong Kong SAR, China)