ET-Former: Efficient Triplane Deformable Attention for 3D Semantic Scene Completion From Monocular Camera

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses 3D semantic scene completion from monocular RGB images, proposing the first end-to-end framework that jointly generates semantic occupancy grids and estimates predictive uncertainty to support safe navigation. Methodologically, it introduces a novel triplane deformable attention mechanism to enhance geometric reasoning and, for the first time in this task, integrates triplane encoding, monocular depth-aware implicit modeling, and a conditional variational autoencoder (CVAE) to jointly model semantics and uncertainty. Evaluated on the Semantic-KITTI test set, the method achieves an IoU of 51.49 (+6.78) and mIoU of 16.30 (+1.26) while requiring only 10.9 GB of GPU memory, setting new state-of-the-art performance and efficiency.

📝 Abstract
We introduce ET-Former, a novel end-to-end algorithm for semantic scene completion using a single monocular camera. Our approach generates a semantic occupancy map from a single RGB observation while simultaneously providing uncertainty estimates for semantic predictions. By designing a triplane-based deformable attention mechanism, our approach improves geometric understanding of the scene compared with other SOTA approaches and reduces noise in semantic predictions. Additionally, through the use of a Conditional Variational AutoEncoder (CVAE), we estimate the uncertainties of these predictions. The generated semantic and uncertainty maps will help formulate navigation strategies that facilitate safe and permissible decision making in the future. Evaluated on the Semantic-KITTI dataset, ET-Former achieves the highest Intersection over Union (IoU) and mean IoU (mIoU) scores while maintaining the lowest GPU memory usage, surpassing state-of-the-art (SOTA) methods. It improves the SOTA scores of IoU from 44.71 to 51.49 and mIoU from 15.04 to 16.30 on the Semantic-KITTI test set, with a notably low training memory consumption of 10.9 GB. Project page: https://github.com/jingGM/ET-Former.git.
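To make the triplane idea in the abstract concrete, here is a minimal NumPy sketch of querying triplane features for a 3D point: the point is projected onto the XY, XZ, and YZ planes, each plane is sampled bilinearly at an offset location, and the results are aggregated. This is an illustrative assumption, not the paper's implementation; the function names (`triplane_query`, `bilinear_sample`), the hand-supplied `offsets` (standing in for learned deformable offsets), and the plain average (standing in for attention weights) are all ours.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (H, W, C) feature plane at continuous coords (u, v)."""
    H, W, _ = plane.shape
    u = np.clip(u, 0.0, W - 1 - 1e-6)
    v = np.clip(v, 0.0, H - 1 - 1e-6)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[v0, u0]
            + du * (1 - dv) * plane[v0, u0 + 1]
            + (1 - du) * dv * plane[v0 + 1, u0]
            + du * dv * plane[v0 + 1, u0 + 1])

def triplane_query(planes, point, offsets):
    """Aggregate features for a 3D point from its XY, XZ, and YZ projections.

    `offsets` are per-plane 2D sampling offsets, a stand-in for the learned
    deformable offsets a real model would predict from the query feature.
    """
    x, y, z = point
    coords = [(x, y), (x, z), (y, z)]  # projections onto the three planes
    feats = [bilinear_sample(p, u + du, v + dv)
             for p, (u, v), (du, dv) in zip(planes, coords, offsets)]
    return np.mean(feats, axis=0)  # simple average in place of attention weights
```

In the actual model the offsets and aggregation weights would be predicted by the deformable attention module rather than fixed; the sketch only shows the triplane projection-and-sample pattern.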
Problem

Research questions and friction points this paper is trying to address.

Generating a semantic occupancy map from a single RGB image.
Improving geometric understanding while reducing noise in semantic predictions.
Estimating prediction uncertainty with a Conditional Variational AutoEncoder.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Triplane-based deformable attention mechanism
Conditional Variational AutoEncoder for uncertainty
Low GPU memory usage with high accuracy
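One common way a CVAE yields uncertainty estimates, as the innovations above suggest, is Monte Carlo sampling: draw several latents from the prior, decode each under the same conditioning features, and read uncertainty off the spread of the decoded predictions. The toy linear decoder and names below (`cvae_decode`, `mc_uncertainty`) are our own illustrative assumptions, not ET-Former's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def cvae_decode(z, cond, W_z, W_c):
    """Toy linear decoder: semantic logits from a latent z and condition features."""
    return z @ W_z + cond @ W_c

def mc_uncertainty(cond, W_z, W_c, n_samples=32, latent_dim=8):
    """Sample latents from the standard-normal prior, decode each, and use
    the per-class variance of the decoded logits as an uncertainty estimate."""
    preds = np.stack([cvae_decode(rng.standard_normal(latent_dim), cond, W_z, W_c)
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)
```

Voxels whose decoded logits vary little across latent samples are confident predictions; high-variance voxels are where a navigation planner would want to be cautious.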