🤖 AI Summary
To address the high computational cost and communication bottlenecks caused by dense BEV modeling in cooperative perception, this paper proposes the first fully sparse collaborative 3D object detection framework. The method abandons conventional dense BEV feature representations and instead introduces an enhanced sparse 3D backbone, a query-driven temporal context learning module, and a lightweight, robust detection head tailored to sparse features, together with a cross-vehicle sparse BEV feature alignment mechanism. Evaluated on OPV2V, DAIR-V2X, and their time-aligned variants, the approach outperforms state-of-the-art methods, achieving higher detection accuracy while substantially reducing computational complexity and communication bandwidth. These results validate the effectiveness and scalability of the fully sparse paradigm for long-range, resource-constrained cooperative perception.
📝 Abstract
Cooperative perception enlarges the field of view and reduces occlusion for an ego vehicle, thereby improving the perception performance and safety of autonomous driving. Despite the success of previous work on cooperative object detection, most methods operate on dense Bird's Eye View (BEV) feature maps, which are computationally demanding and scale poorly to long-range detection; more efficient fully sparse frameworks remain largely unexplored. In this work, we design a fully sparse framework, SparseAlign, with three key components: an enhanced sparse 3D backbone, a query-based temporal context learning module, and a robust detection head specially tailored to sparse features. Extensive experiments on the OPV2V and DAIR-V2X datasets show that our framework, despite its sparsity, outperforms the state of the art while requiring less communication bandwidth. In addition, experiments on the OPV2Vt and DAIR-V2Xt datasets for time-aligned cooperative object detection show a significant performance gain over baseline methods.
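The bandwidth claim rests on a simple observation: a fully sparse framework transmits features only for occupied voxels, whereas dense BEV methods must share the full grid. A minimal back-of-the-envelope sketch of that trade-off, including a toy version of cross-vehicle sparse feature alignment (the grid size, channel count, occupancy rate, and the `align_and_fuse` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dense_bev_bytes(h, w, channels, dtype_bytes=4):
    """Bytes needed to transmit a full dense BEV feature grid."""
    return h * w * channels * dtype_bytes

def sparse_bev_bytes(num_occupied, channels, dtype_bytes=4, coord_bytes=4):
    """Bytes for occupied cells only: features plus (x, y) integer coords."""
    return num_occupied * (channels * dtype_bytes + 2 * coord_bytes)

def align_and_fuse(ego, other, offset):
    """Toy cross-vehicle alignment: shift the collaborator's sparse voxel
    coordinates into the ego frame, then concatenate the two sparse sets.
    `ego` / `other` are (coords, feats) pairs; `offset` is an integer
    (dx, dy) shift standing in for a full pose transform."""
    coords_o, feats_o = other
    coords_o = coords_o + np.asarray(offset)  # move into the ego frame
    coords_e, feats_e = ego
    return (np.concatenate([coords_e, coords_o]),
            np.concatenate([feats_e, feats_o]))

# Example: a 256x256 BEV grid with 64 channels vs. ~1% occupied cells.
dense = dense_bev_bytes(256, 256, 64)
sparse = sparse_bev_bytes(655, 64)
print(f"dense: {dense} B, sparse: {sparse} B, ratio: {dense / sparse:.1f}x")
```

Under these assumed numbers, sharing only the occupied cells is roughly two orders of magnitude cheaper than sharing the dense grid, which is why sparsity also matters for extending the detection range: dense BEV cost grows quadratically with range, while sparse cost grows only with the number of occupied voxels.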