SpaRC: Sparse Radar-Camera Fusion for 3D Object Detection

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 1
🤖 AI Summary
To address high false-positive rates, inaccurate 3D localization, and the computational overhead of BEV rasterization in radar-camera fusion for autonomous-driving 3D detection, this paper proposes SpaRC, a sparse radar-camera fusion transformer. Methodologically, it introduces sparse frustum fusion (SFF) for point-level cross-modal feature alignment, range-adaptive radar aggregation (RAR) for precise, range-aware object localization, and local self-attention (LSA) for focused query aggregation. Crucially, SpaRC bypasses BEV-grid rendering entirely, operating directly on encoded point and image features. Evaluated on the nuScenes and TruckScenes benchmarks, it achieves 67.1 NDS and 63.1 AMOTA, surpassing state-of-the-art dense BEV-based and sparse query-based detectors.

📝 Abstract
In this work, we present SpaRC, a novel Sparse fusion transformer for 3D perception that integrates multi-view image semantics with Radar and Camera point features. The fusion of radar and camera modalities has emerged as an efficient perception paradigm for autonomous driving systems. While conventional approaches utilize dense Bird's Eye View (BEV)-based architectures for depth estimation, contemporary query-based transformers excel in camera-only detection through object-centric methodology. However, these query-based approaches exhibit limitations in false positive detections and localization precision due to implicit depth modeling. We address these challenges through three key contributions: (1) sparse frustum fusion (SFF) for cross-modal feature alignment, (2) range-adaptive radar aggregation (RAR) for precise object localization, and (3) local self-attention (LSA) for focused query aggregation. In contrast to existing methods requiring computationally intensive BEV-grid rendering, SpaRC operates directly on encoded point features, yielding substantial improvements in efficiency and accuracy. Empirical evaluations on the nuScenes and TruckScenes benchmarks demonstrate that SpaRC significantly outperforms existing dense BEV-based and sparse query-based detectors. Our method achieves state-of-the-art performance metrics of 67.1 NDS and 63.1 AMOTA. The code and pretrained models are available at https://github.com/phi-wol/sparc.
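The abstract names range-adaptive radar aggregation (RAR) as the module behind precise object localization but gives no formula. As a rough illustration only, under the assumed semantics that each object query pools radar features within a neighborhood whose radius grows with the query's range (compensating for radar sparsity at distance), a minimal sketch might look like this; the function name, radius schedule, and mean pooling are all hypothetical, not the paper's actual design:

```python
import numpy as np

def range_adaptive_aggregation(query_xyz, radar_xyz, radar_feats,
                               base_radius=1.0, scale=0.02):
    """Hypothetical sketch of range-adaptive aggregation: average radar
    features inside a ball whose radius grows with the query's distance
    from the ego vehicle (assumed at the origin)."""
    out = np.zeros((len(query_xyz), radar_feats.shape[1]))
    for i, q in enumerate(query_xyz):
        # radius widens linearly with range, where radar returns get sparser
        r = base_radius + scale * np.linalg.norm(q)
        mask = np.linalg.norm(radar_xyz - q, axis=1) < r
        if mask.any():
            # simple mean pooling of in-radius radar point features
            out[i] = radar_feats[mask].mean(axis=0)
    return out
```

The actual paper likely uses learned, attention-based aggregation rather than hard radius pooling; this sketch only conveys the range-adaptive idea.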
Problem

Research questions and friction points this paper is trying to address.

Improving 3D object detection accuracy for autonomous driving systems
Addressing limitations in false positive detections and localization precision
Overcoming computational inefficiency of dense BEV-based fusion methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse frustum fusion for cross-modal feature alignment
Range-adaptive radar aggregation for precise object localization
Local self-attention for focused query aggregation
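The page describes local self-attention (LSA) only as "focused query aggregation". As a sketch of the general idea, assuming (hypothetically, since the paper's exact windowing is not given here) that each object query attends only to its k nearest queries in 3D rather than to all queries:

```python
import numpy as np

def local_self_attention(feats, positions, k=4):
    """Illustrative local self-attention: each query attends only to its
    k nearest queries in 3D space via scaled dot-product attention."""
    n, d = feats.shape
    # pairwise Euclidean distances between query anchor positions
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # indices of the k nearest neighbours (the query itself is included)
    nbrs = np.argsort(dists, axis=1)[:, :k]
    out = np.empty_like(feats)
    for i in range(n):
        q = feats[i]                       # query vector
        kv = feats[nbrs[i]]                # (k, d) neighbour keys/values
        logits = kv @ q / np.sqrt(d)       # scaled dot-product scores
        w = np.exp(logits - logits.max())
        w /= w.sum()                       # softmax over the local window
        out[i] = w @ kv                    # weighted sum of neighbour values
    return out
```

Restricting attention to a local window keeps cost near O(n·k) instead of O(n²), which matches the paper's stated emphasis on sparse, efficient query aggregation.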