🤖 AI Summary
Existing BEV-based multi-modal 3D detection methods rely on low-resolution gridding and discard z-axis information, which degrades detection accuracy. Exploiting the sparsity of LiDAR point clouds, we propose SparseVoxFormer, the first framework to combine a high-resolution sparse voxel representation with a Transformer-based detector, eliminating the BEV dimensionality-reduction step. We design a geometry-aware explicit cross-modal fusion mechanism that uses differentiable 2D–3D projection for precise LiDAR–image feature alignment, together with sparse convolutional encoding and attention-based fusion of the concatenated multi-modal features. Evaluated on nuScenes, our method achieves state-of-the-art performance: +2.8% mAP overall, +6.2% AP on long-range objects (>50 m), and a 40% reduction in computational overhead.
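As a rough illustration of the explicit 2D–3D projection fusion described above, the PyTorch sketch below projects sparse voxel centers into a camera image and bilinearly samples a per-voxel image feature via `grid_sample`, which keeps the fusion differentiable. The function name, tensor shapes, and matrix conventions are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of explicit 2D-3D projection fusion; shapes and
# conventions are assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def project_and_sample(voxel_xyz, img_feats, cam_intrinsic, lidar_to_cam, img_hw):
    """Project sparse voxel centers into an image and gather image features.

    voxel_xyz:     (N, 3) voxel-center coordinates in the LiDAR frame.
    img_feats:     (1, C, Hf, Wf) feature map from a 2D image backbone.
    cam_intrinsic: (3, 3) camera intrinsic matrix.
    lidar_to_cam:  (4, 4) LiDAR-to-camera extrinsic matrix.
    img_hw:        (H, W) of the original image, used to normalize pixels.
    """
    N = voxel_xyz.shape[0]
    # Homogeneous coordinates, then transform LiDAR frame -> camera frame.
    ones = torch.ones(N, 1, device=voxel_xyz.device)
    pts_cam = (lidar_to_cam @ torch.cat([voxel_xyz, ones], dim=1).T).T[:, :3]
    # Keep only voxels in front of the camera.
    valid = pts_cam[:, 2] > 1e-3
    # Perspective projection: camera frame -> pixel coordinates (u, v).
    uv = (cam_intrinsic @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-3)
    # Normalize pixels to [-1, 1] for differentiable bilinear sampling.
    H, W = img_hw
    grid = torch.stack([uv[:, 0] / W * 2 - 1, uv[:, 1] / H * 2 - 1], dim=-1)
    sampled = F.grid_sample(
        img_feats, grid.view(1, N, 1, 2), align_corners=False
    ).view(img_feats.shape[1], N).T          # (N, C) image feature per voxel
    # Zero out voxels that fall behind the camera or outside the image.
    in_bounds = (grid.abs() <= 1).all(dim=-1)
    return sampled * (valid & in_bounds).float().unsqueeze(-1)
```

In a SparseVoxFormer-style pipeline, the returned per-voxel image features would then be concatenated with the sparse voxel features before the Transformer-based detector.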
📝 Abstract
Most previous 3D object detection methods that leverage the multi-modality of LiDAR and cameras use the Bird's Eye View (BEV) space as an intermediate feature representation. However, this space has low resolution along the x and y axes and sacrifices z-axis information to reduce the overall feature size, which can degrade accuracy. To tackle this low-resolution feature problem, this paper focuses on the sparse nature of LiDAR point cloud data. We observe that the number of occupied cells in a 3D voxel grid constructed from a LiDAR scan can even be smaller than the total number of cells in a BEV map, despite the voxel grid's significantly higher resolution. Based on this, we introduce a novel sparse voxel-based transformer network for 3D object detection, dubbed SparseVoxFormer. Instead of performing BEV feature extraction, we directly feed sparse voxel features as input to a transformer-based detector. For the camera modality, we introduce an explicit modality fusion approach that projects 3D voxel coordinates onto 2D images and collects the corresponding image features. Thanks to these components, our approach can leverage geometrically richer multi-modal features while also reducing computational cost. Beyond the proof-of-concept level, we further focus on facilitating better multi-modal fusion and flexible control over the number of sparse features. Finally, thorough experimental results demonstrate that using a significantly smaller number of sparse features drastically reduces the computational cost of a 3D object detector while enhancing both overall and long-range performance.
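To make the sparsity observation concrete, the sketch below counts the occupied cells of a high-resolution voxel grid and compares that count to the size of a dense BEV map. The grid resolution, ranges, and the randomly generated stand-in point cloud are assumptions for illustration only (the 0.075 m voxels and 180x180 BEV map mirror common nuScenes settings).

```python
# Minimal sketch of the sparsity observation: occupied high-resolution voxels
# vs. a dense BEV grid. All numbers here are illustrative assumptions.
import numpy as np

def count_occupied_voxels(points, pc_range, voxel_size):
    """points: (N, 3) LiDAR points; returns the number of unique occupied voxels."""
    lo = np.asarray(pc_range[:3], dtype=np.float64)
    hi = np.asarray(pc_range[3:], dtype=np.float64)
    size = np.asarray(voxel_size, dtype=np.float64)
    idx = np.floor((points - lo) / size).astype(np.int64)   # voxel index per point
    dims = np.ceil((hi - lo) / size).astype(np.int64)       # grid dimensions
    keep = ((idx >= 0) & (idx < dims)).all(axis=1)          # drop out-of-range points
    flat = np.ravel_multi_index(idx[keep].T, dims)          # unique id per voxel
    return np.unique(flat).size

# A random stand-in for a LiDAR scan (a real scan is even more structured/sparse).
rng = np.random.default_rng(0)
points = np.column_stack([
    rng.uniform(-54, 54, 30_000),   # x (m)
    rng.uniform(-54, 54, 30_000),   # y (m)
    rng.uniform(-5, 3, 30_000),     # z (m)
])
pc_range = [-54, -54, -5, 54, 54, 3]

# 0.075 m voxels give a 1440 x 1440 x 40 grid (~83M cells), yet the occupied
# count is bounded by the point count (30k here), which is already below the
# 32,400 cells of a dense 180 x 180 BEV map.
occupied = count_occupied_voxels(points, pc_range, voxel_size=[0.075, 0.075, 0.2])
bev_cells = 180 * 180
print(f"{occupied} occupied voxels vs {bev_cells} dense BEV cells")
```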