TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points for 3D Environment Awareness

📅 2025-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address spatial information loss in voxel-based methods and the limited structural modeling capacity of point cloud–based approaches for 3D semantic occupancy prediction, this paper proposes a dual-modal representation integrating 3D Gaussian sets with sparse point clouds. The authors introduce a collaborative modeling framework featuring an adaptive cross-modal fusion mechanism and layer-wise dynamic point cloud optimization, built upon a Transformer architecture, 3D Gaussian parameterization, query-driven semantic decoding, and multi-scale feature fusion. The method preserves high spatial localization accuracy while significantly enhancing voxel-level structural modeling capability. Evaluated on the Occ3D-nuScenes benchmark, it achieves substantial IoU improvements over existing state-of-the-art methods, demonstrating superior geometric fidelity and semantic accuracy.

📝 Abstract
3D semantic occupancy prediction has rapidly become a research focus in robotics and autonomous-driving environment perception, owing to its ability to provide more realistic geometric perception and its close integration with downstream tasks. By predicting the occupancy of 3D space in the environment, the capability and robustness of scene understanding can be effectively improved. However, existing occupancy prediction methods are primarily built on voxel- or point cloud–based representations: voxel-based networks often lose spatial information during voxelization, while point cloud–based methods, although better at retaining spatial location information, are limited in representing volumetric structural details. To address this, we propose a dual-modal prediction method based on 3D Gaussian sets and sparse points, which balances spatial location and volumetric structural information to achieve higher accuracy in semantic occupancy prediction. Specifically, our method adopts a Transformer-based architecture, taking 3D Gaussian sets, sparse points, and queries as inputs. Through the Transformer's multi-layer structure, the enhanced queries and 3D Gaussian sets jointly contribute to semantic occupancy prediction, and an adaptive fusion mechanism integrates the semantic outputs of both modalities to produce the final prediction. Additionally, to further improve accuracy, we dynamically refine the point cloud at each layer, providing more precise location information during occupancy prediction. Experiments on the Occ3D-nuScenes dataset demonstrate the superior performance of the proposed method on IoU-based metrics.
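The adaptive fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sigmoid gating layer, its weight vector `w_gate`, and the per-voxel logit layout are all assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over class logits.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(gauss_logits, point_logits, w_gate):
    """Blend per-voxel semantic logits from the Gaussian and point branches.

    A hypothetical learned gate (a linear layer over the concatenated
    logits, followed by a sigmoid) yields a per-voxel weight alpha that
    mixes the two modalities before the final class probabilities.

    gauss_logits, point_logits: (N, C) class logits per voxel.
    w_gate: (2*C,) weights of the illustrative gating layer.
    """
    feat = np.concatenate([gauss_logits, point_logits], axis=-1)   # (N, 2C)
    alpha = 1.0 / (1.0 + np.exp(-(feat @ w_gate)))                 # (N,), in (0, 1)
    fused = alpha[:, None] * gauss_logits + (1.0 - alpha[:, None]) * point_logits
    return softmax(fused)                                          # (N, C) probabilities
```

In the actual method the gate would be trained end-to-end with the Transformer; here it only shows how a per-voxel weight can arbitrate between the two modality outputs.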
Problem

Research questions and friction points this paper is trying to address.

Improves 3D semantic occupancy prediction accuracy
Balances spatial and volumetric structural information
Enhances scene understanding for robotics and autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-modal prediction using 3D Gaussian and sparse points
Transformer-based architecture for enhanced semantic occupancy prediction
Dynamic point cloud refinement for precise location information
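The layer-wise point refinement listed above can be sketched as a residual update applied at each Transformer layer. The offset head (`offset_mlp`) and the layer count are hypothetical stand-ins; the paper does not specify their form here.

```python
import numpy as np

def refine_points(points, offset_mlp, n_layers=3):
    """Layer-wise dynamic refinement of sparse point positions.

    At each layer, a small head (offset_mlp, an assumed callable mapping
    (N, 3) positions to (N, 3) residual offsets) nudges every point; the
    updated positions feed the next layer, sharpening localization with depth.

    Returns the final positions and the per-layer trajectory.
    """
    trajectory = [points]
    for _ in range(n_layers):
        points = points + offset_mlp(points)   # residual position update
        trajectory.append(points)
    return points, trajectory
```

In the real model the offsets would be predicted from layer features, not positions alone; the sketch only conveys the iterative residual-update structure.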
Mu Chen
University of Technology Sydney (UTS)
video segmentation, video understanding
Wenyu Chen
Massachusetts Institute of Technology
optimization, statistical learning
Mingchuan Yang
China Telecom Research Institute, Beijing 102200, China
Yuan Zhang
China Telecom Research Institute, Beijing 102200, China
Tao Han
China Telecom Research Institute, Beijing 102200, China
Xinchi Li
China Telecom Research Institute, Beijing 102200, China
Yunlong Li
China Telecom Research Institute, Beijing 102200, China
Huaici Zhao
Key Laboratory of Opto-Electronic Information Processing, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China, and also with the University of Chinese Academy of Sciences, Beijing 100049, China