🤖 AI Summary
Existing LiDAR occupancy grid prediction methods predominantly rely on deterministic, grid-level optimization, often yielding physically implausible and scene-inconsistent artifacts that compromise safety-critical autonomous navigation. To address this, we propose a generative occupancy prediction framework that decouples representation learning from stochastic prediction, explicitly modeling uncertainty in the latent space while supporting multimodal conditional inputs, including RGB images and high-definition maps. Our approach employs a hybrid VAE-GAN architecture, enabling end-to-end training and transfer across robotic platforms. Evaluated on NuScenes, the Waymo Open Dataset, and a custom real-world vehicle dataset, our method achieves state-of-the-art performance, significantly improving prediction fidelity, physical plausibility, and scene consistency, which are key requirements for robust autonomous driving systems.
📝 Abstract
Environment prediction frameworks are integral for autonomous vehicles, enabling safe navigation in dynamic environments. LiDAR-generated occupancy grid maps (L-OGMs) offer a robust bird's-eye-view scene representation that facilitates joint scene prediction without relying on manual labeling, unlike commonly used trajectory prediction frameworks. Prior approaches have optimized deterministic L-OGM prediction architectures directly in grid-cell space. While these methods have achieved some degree of success, they sometimes produce unrealistic and incorrect predictions. We claim that the quality and realism of the forecasted occupancy grids can be enhanced with the use of generative models. We propose a framework that decouples occupancy prediction into two stages: representation learning and stochastic prediction within the learned latent space. Our approach allows for conditioning the model on other available sensor modalities, such as RGB cameras and high-definition maps. We demonstrate that our approach achieves state-of-the-art performance and is readily transferable between different robotic platforms, evaluated on the real-world NuScenes and Waymo Open datasets as well as a custom dataset collected on an experimental vehicle platform.
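To make the two-stage decoupling concrete, the following is a minimal numpy sketch of the pipeline shape only: an encoder maps an occupancy grid to a latent Gaussian, stochasticity enters via reparameterized sampling in the latent space, a latent transition model rolls the state forward, and a decoder maps predicted latents back to occupancy probabilities. All dimensions, weights, and function names here are illustrative assumptions, not the paper's actual VAE-GAN architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative only, not the paper's real dimensions).
GRID = 16 * 16   # flattened occupancy grid
LATENT = 8       # latent code dimension

# Stage 1: representation learning -- a VAE-style encoder/decoder pair.
# A real system trains these end-to-end; random linear maps stand in here.
W_enc_mu = rng.normal(size=(LATENT, GRID)) * 0.01
W_enc_logvar = rng.normal(size=(LATENT, GRID)) * 0.01
W_dec = rng.normal(size=(GRID, LATENT)) * 0.01

def encode(grid):
    """Map an occupancy grid to a latent Gaussian (mu, logvar)."""
    return W_enc_mu @ grid, W_enc_logvar @ grid

def reparameterize(mu, logvar):
    """Sample a latent code; stochasticity lives here, not in grid space."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Map a latent code back to per-cell occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-(W_dec @ z)))  # sigmoid

# Stage 2: stochastic prediction in the learned latent space.
# A trained model would use a sequence predictor; a noisy linear
# transition stands in for it here.
W_pred = rng.normal(size=(LATENT, LATENT)) * 0.1

def predict_latent(z, horizon):
    """Roll the latent state forward; each step draws fresh noise."""
    futures = []
    for _ in range(horizon):
        z = W_pred @ z + 0.01 * rng.normal(size=z.shape)
        futures.append(z)
    return futures

# Full pipeline: encode the observed grid, predict latents, decode.
observed = rng.integers(0, 2, size=GRID).astype(float)
mu, logvar = encode(observed)
z0 = reparameterize(mu, logvar)
predicted_grids = [decode(z) for z in predict_latent(z0, horizon=5)]
```

Because sampling happens once in the compact latent space rather than per grid cell, each draw of `z0` (or of the transition noise) yields a globally coherent future scene, which is the motivation for predicting in latent space rather than grid-cell space.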