🤖 AI Summary
To address the challenge of balancing accuracy and efficiency in 3D occupancy prediction for autonomous driving, this paper proposes a dual-branch architecture that synergizes a bird's-eye-view (BEV) representation with query-driven sparse points. The BEV branch excels at modeling large objects and planar structures but loses fine-grained detail on small objects; conversely, the sparse point branch flexibly captures small objects yet struggles to represent continuous surfaces efficiently. The two branches are unified via cross-attention to enable complementary 3D geometric modeling and are fused with multi-scale voxel features. This work introduces the first dual-stream fusion paradigm, overcoming the inherent scale-adaptivity limitations of single-representation approaches. Evaluated on Occ3D-nuScenes and Occ3D-Waymo, the method achieves state-of-the-art accuracy while significantly outperforming mainstream efficient methods in inference speed, demonstrating both high accuracy and real-time capability.
📝 Abstract
3D occupancy provides fine-grained 3D geometry and semantics for scene understanding, which is critical for autonomous driving. Most existing methods, however, carry high compute costs, requiring dense 3D feature volumes and cross-attention to effectively aggregate information. More recent works have adopted Bird's Eye View (BEV) or sparse points as the scene representation at much reduced cost, but each still suffers from its respective shortcomings. More concretely, BEV struggles with small objects, which often lose significant information after being projected to the ground plane. Points, on the other hand, can flexibly model small objects in 3D but are inefficient at capturing flat surfaces or large objects. To address these challenges, we present BePo, a novel 3D occupancy prediction approach that combines BEV- and sparse-point-based representations. We propose a dual-branch design: a query-based sparse points branch and a BEV branch. The 3D information learned in the sparse points branch is shared with the BEV stream via cross-attention, which enriches the weakened signals of difficult objects on the BEV plane. The outputs of both branches are finally fused to generate the predicted 3D occupancy. Extensive experiments on the Occ3D-nuScenes and Occ3D-Waymo benchmarks demonstrate the superiority of our proposed BePo. Moreover, BePo delivers competitive inference speed compared to the latest efficient approaches.
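The core idea can be sketched in a few lines: flattened BEV cells act as queries that attend to sparse point features, and the enriched BEV map is decoded into per-voxel occupancy logits. This is a minimal NumPy illustration; the tensor shapes, the single-head attention, random weights, and the channel-to-height occupancy head are assumptions for clarity, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(bev_tokens, point_tokens, d_k=32):
    """Enrich flattened BEV tokens (H*W, C) with sparse point tokens (N, C)."""
    C = bev_tokens.shape[-1]
    # Random projection weights stand in for learned parameters (assumption).
    Wq = rng.normal(size=(C, d_k)) / np.sqrt(C)
    Wk = rng.normal(size=(C, d_k)) / np.sqrt(C)
    Wv = rng.normal(size=(C, C)) / np.sqrt(C)
    # BEV queries attend to point keys; scaled dot-product attention.
    attn = softmax(bev_tokens @ Wq @ (point_tokens @ Wk).T / np.sqrt(d_k))
    # Residual connection: attended point information enriches the BEV map.
    return bev_tokens + attn @ (point_tokens @ Wv)

H, W, C = 16, 16, 64   # BEV grid size and channel width (illustrative)
N = 100                # number of sparse point queries (illustrative)
Z, K = 8, 18           # voxel height bins and semantic classes (illustrative)

bev = rng.normal(size=(H * W, C))
points = rng.normal(size=(N, C))

enriched = cross_attend(bev, points)

# Occupancy head: lift each enriched BEV cell to Z height bins x K classes.
W_occ = rng.normal(size=(C, Z * K)) / np.sqrt(C)
occ_logits = (enriched @ W_occ).reshape(H, W, Z, K)
print(occ_logits.shape)  # (16, 16, 8, 18)
```

In the paper's full design the two branches are additionally fused with multi-scale voxel features before prediction; the sketch above only shows the cross-attention step that lets sparse points restore signals that BEV projection weakens.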