Self-supervised Multi-future Occupancy Forecasting for Autonomous Driving

📅 2024-07-30
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing LiDAR occupancy prediction methods suffer from two key limitations: deterministic modeling fails to capture environmental stochasticity, and they inadequately fuse multimodal inputs—such as RGB images, high-definition maps, and planning trajectories. This paper proposes a self-supervised multi-future occupancy prediction framework that models uncertainty in latent space. Our core contributions are: (1) the first latent-space stochastic occupancy prediction paradigm; (2) a unified feature fusion mechanism supporting diverse multimodal conditional inputs; and (3) a dual-path decoder architecture—comprising a single-step decoder for real-time inference and a diffusion-enhanced batch decoder for temporal consistency. Evaluated on the nuScenes and Waymo Open datasets, our method achieves state-of-the-art performance, significantly mitigating compression artifacts and motion discontinuities. Both qualitative and quantitative results demonstrate consistent improvements across all major metrics.

📝 Abstract
Environment prediction frameworks are critical for the safe navigation of autonomous vehicles (AVs) in dynamic settings. LiDAR-generated occupancy grid maps (L-OGMs) offer a robust bird's-eye view scene representation, enabling self-supervised joint scene predictions while exhibiting resilience to partial observability and perception detection failures. Prior approaches have focused on deterministic L-OGM prediction architectures within the grid cell space. While these methods have seen some success, they frequently produce unrealistic predictions and fail to capture the stochastic nature of the environment. Additionally, they do not effectively integrate additional sensor modalities present in AVs. Our proposed framework, Latent Occupancy Prediction (LOPR), performs stochastic L-OGM prediction in the latent space of a generative architecture and allows for conditioning on RGB cameras, maps, and planned trajectories. We decode predictions using either a single-step decoder, which provides high-quality predictions in real-time, or a diffusion-based batch decoder, which can further refine the decoded frames to address temporal consistency issues and reduce compression losses. Our experiments on the nuScenes and Waymo Open datasets show that all variants of our approach qualitatively and quantitatively outperform prior approaches.
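To make the abstract's pipeline concrete — encode the current grid to a latent distribution, sample several latent futures rather than one point estimate, and decode each sample back to an occupancy grid — here is a minimal shape-level numpy sketch. This is not the paper's architecture: the grid size, latent width, random-projection "encoder"/"decoder" weights, and fixed latent standard deviation are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
GRID, LATENT, K = 128, 32, 5                                  # grid side, latent width, futures sampled
W_ENC = rng.standard_normal((GRID * GRID, LATENT)) / GRID     # stand-in encoder weights
W_DEC = rng.standard_normal((LATENT, GRID * GRID)) / LATENT   # stand-in decoder weights

def encode(ogm):
    """Compress an occupancy grid to a latent Gaussian (mean, std)."""
    mu = ogm.reshape(-1) @ W_ENC
    sigma = np.full(LATENT, 0.1)     # fixed std, purely for the sketch
    return mu, sigma

def sample_futures(mu, sigma, k=K):
    """Stochastic prediction: draw k latent futures instead of one point estimate."""
    return mu + sigma * rng.standard_normal((k, LATENT))

def decode(z):
    """Single-step decoder stand-in: map a latent back to occupancy probabilities."""
    logits = (z @ W_DEC).reshape(GRID, GRID)
    return 1.0 / (1.0 + np.exp(-logits))

ogm = rng.random((GRID, GRID))       # current L-OGM frame
mu, sigma = encode(ogm)
futures = [decode(z) for z in sample_futures(mu, sigma)]
print(len(futures), futures[0].shape)  # 5 distinct predicted grids, each 128x128
```

Sampling in latent space is what turns the deterministic single-output setup into a multi-future one: each draw of `z` decodes to a different plausible grid, while the diffusion-based batch decoder in the paper would further refine such decoded frames for temporal consistency.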
Problem

Research questions and friction points this paper is trying to address.

Predicting stochastic LiDAR occupancy grids for autonomous driving
Integrating multi-sensor data like RGB cameras and maps
Improving temporal consistency and reducing compression losses in decoded predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic L-OGM prediction in latent space
Integrates RGB cameras, maps, and trajectories
Uses single-step or diffusion-based decoders
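The unified multimodal conditioning in the second bullet can be sketched as reducing each available modality to a fixed-width embedding and concatenating the results into one conditioning vector for the latent predictor. Again a hypothetical numpy stand-in, not the paper's fusion mechanism: the embedding width, mean-pool extractor, and feature shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

EMB = 16  # hypothetical per-modality embedding width

def embed(features):
    """Stand-in extractor: mean-pool a modality's features, project to EMB dims."""
    v = np.atleast_2d(features).mean(axis=0)
    w = rng.standard_normal((v.size, EMB)) / np.sqrt(v.size)
    return v @ w

camera = rng.random((6, 256))   # e.g. surround-view image features
hd_map = rng.random((4, 64))    # rasterized map features
plan = rng.random((10, 2))      # planned (x, y) waypoints

# Unified conditioning: concatenate the per-modality embeddings into one
# vector that the latent-space predictor consumes, regardless of which
# modalities happen to be available.
cond = np.concatenate([embed(camera), embed(hd_map), embed(plan)])
print(cond.shape)  # (48,)
```

Because every modality is mapped to the same embedding width before fusion, adding or dropping a sensor only changes the length of the concatenated vector, which is what makes a single mechanism serve diverse conditional inputs.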