🤖 AI Summary
To address the complementary drawbacks of LiDAR–camera fusion for 3D object detection — semantic deficiency in LiDAR features and spatial ambiguity in camera features — this paper proposes a bidirectional complementary fusion framework. The framework introduces a Voxel Enhancement Module (VEM) to boost the semantic discriminability of LiDAR features and an Image Enhancement Module (IEM) to strengthen the 3D geometric awareness of camera features. Further, an adaptive Unified Fusion (U-Fusion) mechanism enables cross-modal, attention-guided dynamic weighting. Leveraging voxelized LiDAR encoding and 2D CNN-based image feature extraction, the method builds a unified representation jointly enhanced in both semantics and geometry. Evaluated on the nuScenes benchmark, the approach outperforms state-of-the-art methods, with notable gains in mAP and BEV localization accuracy — particularly for small objects and occluded scenes.
📝 Abstract
3D object detection is an important task that has been widely applied in autonomous driving. A recent trend for this task is to fuse multi-modal inputs, i.e., LiDAR and camera, by unifying the two modalities in the same 3D space. However, during direct fusion in a unified space, the drawbacks of both modalities (LiDAR features lack detailed semantic information, and camera features lack accurate 3D spatial information) are also preserved, diluting the semantic and spatial awareness of the final unified representation. To address this issue, this letter proposes a novel bidirectional complementary LiDAR–camera fusion framework, called BiCo-Fusion, that achieves robust semantic- and spatial-aware 3D object detection. The key insight is to fuse LiDAR and camera features in a bidirectional complementary way, enhancing the semantic awareness of the LiDAR features and the 3D spatial awareness of the camera features. The enhanced features from both modalities are then adaptively fused to build a semantic- and spatial-aware unified representation. Specifically, we introduce Pre-Fusion, consisting of a Voxel Enhancement Module (VEM) that enhances the semantic awareness of voxel features using 2D camera features and an Image Enhancement Module (IEM) that enhances the 3D spatial awareness of camera features using 3D voxel features. We then introduce Unified Fusion (U-Fusion) to adaptively fuse the enhanced features from the previous stage into a unified representation. Extensive experiments demonstrate the superiority of our BiCo-Fusion over prior art.
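The adaptive weighting behind U-Fusion can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction, not the paper's implementation: it assumes both modalities have already been projected to a shared C-channel grid (flattened over spatial locations), and uses a single learned linear projection plus a sigmoid to produce a per-location gate that mixes the two feature maps; the names `adaptive_unified_fusion`, `w_attn`, and `b_attn` are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_unified_fusion(f_lidar, f_cam, w_attn, b_attn):
    """Fuse LiDAR and camera feature maps with per-location learned gates.

    f_lidar, f_cam: (N, C) features, one row per spatial location.
    w_attn: (2C, 1) projection yielding a scalar gate per location
            (a stand-in for the paper's cross-modal attention weights).
    """
    gate_in = np.concatenate([f_lidar, f_cam], axis=1)  # (N, 2C) cross-modal input
    w = sigmoid(gate_in @ w_attn + b_attn)              # (N, 1) gate in (0, 1)
    # Convex combination: w leans toward LiDAR, (1 - w) toward camera.
    return w * f_lidar + (1.0 - w) * f_cam

rng = np.random.default_rng(0)
N, C = 4, 8
f_l = rng.normal(size=(N, C))
f_c = rng.normal(size=(N, C))
fused = adaptive_unified_fusion(f_l, f_c, rng.normal(size=(2 * C, 1)), 0.0)
print(fused.shape)  # (4, 8)
```

Because the gate is a convex combination, every fused value stays between the corresponding LiDAR and camera feature values, so neither modality can be entirely discarded at any location.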