A Spatiotemporal Approach to Tri-Perspective Representation for 3D Semantic Occupancy Prediction

📅 2024-01-24
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
Existing vision-based 3D semantic occupancy prediction methods predominantly rely on static spatial modeling (e.g., TPV), neglecting temporal dynamics and thereby limiting scene understanding. To address this, we propose S2TPVFormer, the first framework to explicitly incorporate temporal modeling into the TPV paradigm. Specifically, we design a Temporal Cross-View Hybrid Attention (TCVHA) mechanism to achieve spatiotemporally consistent alignment and fusion across the three canonical TPV representations. Furthermore, we introduce Spatiotemporally Synchronized TPV (S2TPV) embeddings to jointly encode spatial structure and temporal evolution. Evaluated on nuScenes, our method achieves a +4.1% improvement in 3D semantic occupancy mIoU over the TPVFormer baseline, demonstrating substantial gains in accuracy and robustness. S2TPVFormer establishes a new paradigm for vision-centric, real-time, and temporally aware 3D scene understanding.


๐Ÿ“ Abstract
Holistic understanding and reasoning in 3D scenes are crucial for the success of autonomous driving systems. The evolution of 3D semantic occupancy prediction as a pretraining task for autonomous driving and robotic applications captures finer 3D details than traditional 3D detection methods. Vision-based 3D semantic occupancy prediction is increasingly overlooked in favor of LiDAR-based approaches, which have shown superior performance in recent years. However, we present compelling evidence that there is still potential for enhancing vision-based methods. Existing approaches predominantly focus on spatial cues such as tri-perspective view (TPV) embeddings, often overlooking temporal cues. This study introduces S2TPVFormer, a spatiotemporal transformer architecture designed to predict temporally coherent 3D semantic occupancy. By introducing temporal cues through a novel Temporal Cross-View Hybrid Attention (TCVHA) mechanism, we generate Spatiotemporal TPV (S2TPV) embeddings that enhance the prior process. Experimental evaluations on the nuScenes dataset demonstrate a significant +4.1% absolute gain in mean Intersection over Union (mIoU) for 3D semantic occupancy compared to the baseline TPVFormer, validating the effectiveness of S2TPVFormer in advancing 3D scene perception.
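To make the core idea concrete, here is a minimal, illustrative sketch of temporal cross-view attention: queries from the current frame's TPV planes attend to the previous frame's plane features, yielding temporally fused embeddings. This is a toy single-head formulation in NumPy under assumed shapes; the function and variable names are hypothetical and do not reproduce the paper's actual TCVHA implementation, which operates on aligned multi-camera features inside a transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_cross_view_attention(curr_plane, prev_plane):
    """Toy single-head attention: queries come from the current TPV plane,
    keys/values from the previous frame's plane (shapes: (tokens, dim)).
    Illustrative only -- not the paper's TCVHA implementation."""
    d_k = curr_plane.shape[-1]
    scores = curr_plane @ prev_plane.T / np.sqrt(d_k)   # (N, N) affinities
    attn = softmax(scores, axis=-1)                     # rows sum to 1
    return attn @ prev_plane                            # temporally fused features

# Three TPV planes (e.g., top, side, front views), flattened to token sequences
rng = np.random.default_rng(0)
curr = [rng.standard_normal((16, 8)) for _ in range(3)]
prev = [rng.standard_normal((16, 8)) for _ in range(3)]
fused = [temporal_cross_view_attention(c, p) for c, p in zip(curr, prev)]
print([f.shape for f in fused])  # three fused planes, same shape as the inputs
```

In the paper's full mechanism, this temporal attention is hybridized with cross-view attention among the three TPV planes, so each plane's embedding is refined by both the other views and the previous timestep.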
Problem

Research questions and friction points this paper is trying to address.

Enhance vision-based 3D semantic occupancy prediction
Integrate temporal cues with spatial embeddings
Improve autonomous driving systems' 3D scene understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatiotemporal transformer architecture
Temporal Cross-View Hybrid Attention
S2TPV embeddings enhancement
Sathira Silva
University of Peradeniya, Peradeniya 20400, Sri Lanka
Savindu Wannigama
University of Peradeniya, Peradeniya 20400, Sri Lanka
Roshan Ragel
University of Peradeniya, Peradeniya 20400, Sri Lanka
Gihan Chanaka Jayatilaka
University of Maryland, College Park, MD 20742, USA