🤖 AI Summary
This work addresses the limitations of existing 3D occupancy prediction methods, which are constrained by narrow fields of view and predefined semantic categories, thereby hindering omnidirectional safety-aware perception for embodied agents in open-world environments. To overcome this, we propose the first purely vision-based, end-to-end framework for omnidirectional open-vocabulary 3D occupancy prediction. Our approach models the full 360° space using a polar-coordinate spiral topology and unifies geometric and semantic supervision to achieve consistent pixel–voxel–text representations. Key innovations include the Polar-spiral Mamba (PsM) module for continuous omnidirectional spatial modeling, a gradient-free natural modality alignment mechanism, and a unified voxel representation that jointly encodes geometry and open-set semantics. The method achieves state-of-the-art performance on the QuadOcc and Human360Occ benchmarks, significantly enhancing cross-scene generalization and semantic scalability.
📝 Abstract
Understanding and reconstructing the 3D world through omnidirectional perception is an inevitable trend in the development of autonomous agents and embodied intelligence. However, existing 3D occupancy prediction methods are constrained by limited perspective inputs and predefined training distributions, making them difficult to apply to embodied agents that require comprehensive and safe perception of scenes during open-world exploration. To address this, we present O3N, the first purely visual, end-to-end Omnidirectional Open-vocabulary Occupancy predictioN framework. O3N embeds omnidirectional voxels in a polar-spiral topology via the Polar-spiral Mamba (PsM) module, enabling continuous spatial representation and long-range context modeling across 360°. The Occupancy Cost Aggregation (OCA) module introduces a principled mechanism for unifying geometric and semantic supervision within the voxel space, ensuring consistency between the reconstructed geometry and the underlying semantic structure. Moreover, Natural Modality Alignment (NMA) establishes a gradient-free alignment pathway that harmonizes visual features, voxel embeddings, and text semantics, forming a consistent "pixel-voxel-text" representation triad. Extensive experiments demonstrate that our method not only achieves state-of-the-art performance on the QuadOcc and Human360Occ benchmarks but also exhibits remarkable cross-scene generalization and semantic scalability, paving the way toward universal 3D world modeling. The source code will be made publicly available at https://github.com/MengfeiD/O3N.
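To give intuition for the polar-spiral idea, the sketch below serializes a polar voxel grid into one continuous 1D sequence that wraps around the full 360° without an angular seam, which is the kind of ordering a state-space model such as Mamba could scan. This is a minimal illustrative assumption, not the paper's actual PsM implementation; the function name `polar_spiral_order` and the per-ring phase shift are hypothetical choices.

```python
def polar_spiral_order(num_radial: int, num_angular: int):
    """Serialize a polar grid of (ring, angle) cells into a single
    spiral-like sequence. Each ring is traversed over all angles, and
    each ring's starting angle is phase-shifted by one step so the
    sequence advances smoothly from ring to ring (illustrative choice,
    not the paper's scan order)."""
    order = []
    for r in range(num_radial):
        start = r % num_angular  # hypothetical per-ring phase shift
        for k in range(num_angular):
            order.append((r, (start + k) % num_angular))
    return order

# Example: 4 radial rings, 8 angular bins -> every cell visited exactly once.
seq = polar_spiral_order(4, 8)
```

Because the angular index is taken modulo `num_angular`, the sequence has no discontinuity at the 0°/360° boundary, which mirrors the abstract's claim of continuous omnidirectional context modeling.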