🤖 AI Summary
In autonomous driving, coarse-grained fusion of point clouds and images in multi-modal 3D object detection leads to dimensional mismatch and inaccurate cross-modal feature alignment. To address this, we propose a fine-grained cross-modal fusion framework operating within a unified 3D space. Our key contributions are: (1) Pseudo-Raw Convolution (PRConv), which jointly modulates multi-source point features while preserving geometric structure; and (2) Cross-Attention Adaptive Fusion (CAAF), which enables pixel-level and point-level feature alignment and weighted aggregation over unified 3D regions of interest (RoIs). Evaluated on KITTI and nuScenes, our method achieves consistent improvements, boosting both BEV and 3D mean Average Precision (mAP) by 2.1–3.8%. These results validate the effectiveness of unified 3D representation and fine-grained cross-modal fusion, demonstrating strong generalization across diverse benchmark datasets.
📝 Abstract
Multimodal 3D object detection has garnered considerable interest in autonomous driving. However, multimodal detectors suffer from dimension mismatches that arise from coarsely fusing 3D points with 2D pixels, which leads to sub-optimal fusion performance. In this paper, we propose FGU3R, a multimodal framework that tackles this issue via unified 3D representation and fine-grained fusion, and consists of two key components. First, we propose an efficient feature extractor for raw and pseudo points, termed Pseudo-Raw Convolution (PRConv), which modulates multimodal features synchronously and aggregates features from the different point types at keypoints based on multimodal interaction. Second, we design a Cross-Attention Adaptive Fusion (CAAF) module that adaptively fuses homogeneous 3D RoI (Region of Interest) features in a fine-grained manner via a cross-attention variant. Together, they perform fine-grained fusion on a unified 3D representation. Experiments on the KITTI and nuScenes datasets demonstrate the effectiveness of the proposed method.
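To make the fusion idea concrete, below is a minimal NumPy sketch of scaled dot-product cross-attention between point-branch and image-branch RoI features, followed by a gated blend. All dimensions, projection weights, and the sigmoid gate are illustrative assumptions for this sketch; the paper's actual CAAF module and its learned parameters may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(point_feats, pixel_feats, d_k=32, rng=None):
    """Fuse point-branch RoI features (queries) with image-branch RoI
    features (keys/values) via cross-attention, then blend adaptively.
    Projection weights here are random stand-ins for learned layers."""
    rng = rng or np.random.default_rng(0)
    d_p, d_i = point_feats.shape[-1], pixel_feats.shape[-1]
    Wq = rng.standard_normal((d_p, d_k)) / np.sqrt(d_p)  # hypothetical
    Wk = rng.standard_normal((d_i, d_k)) / np.sqrt(d_i)  # hypothetical
    Wv = rng.standard_normal((d_i, d_p)) / np.sqrt(d_i)  # hypothetical
    Q = point_feats @ Wq                   # (N_pts, d_k)
    K = pixel_feats @ Wk                   # (N_pix, d_k)
    V = pixel_feats @ Wv                   # (N_pix, d_p)
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N_pts, N_pix) alignment
    cross = attn @ V                       # image features aligned to points
    # Adaptive fusion: a per-channel sigmoid gate blends the modalities.
    gate = 1.0 / (1.0 + np.exp(-(point_feats + cross)))
    return gate * point_feats + (1.0 - gate) * cross

# Toy RoI: 64 keypoint features (128-d) and a 7x7 image patch (256-d).
roi_point = np.random.default_rng(1).standard_normal((64, 128))
roi_pixel = np.random.default_rng(2).standard_normal((49, 256))
fused = cross_attention_fuse(roi_point, roi_pixel)
```

The fused output keeps the point-branch dimensionality, so it can feed the same detection head as the unimodal features.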