SplatAD: Real-Time Lidar and Camera Rendering with 3D Gaussian Splatting for Autonomous Driving

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Neural Radiance Field (NeRF) methods can render both camera and LiDAR data for multi-sensor simulation in autonomous driving, but their slow rendering limits large-scale testing, while existing 3D Gaussian Splatting (3DGS) frameworks support only RGB camera data. This work proposes SplatAD, the first 3DGS framework for real-time, sensor-realistic rendering of dynamic scenes for both cameras and LiDAR. Methodologically, SplatAD models key sensor-specific phenomena (rolling shutter, LiDAR intensity, ray dropout) with purpose-built algorithms and GPU-based rasterization to maximize rendering efficiency. Evaluated on three autonomous driving datasets, SplatAD achieves state-of-the-art quality with gains of up to +2 PSNR for novel-view synthesis and +3 PSNR for reconstruction, while rendering an order of magnitude faster than NeRF-based methods.

📝 Abstract
Ensuring the safety of autonomous robots, such as self-driving vehicles, requires extensive testing across diverse driving scenarios. Simulation is a key ingredient for conducting such testing in a cost-effective and scalable way. Neural rendering methods have gained popularity, as they can build simulation environments from collected logs in a data-driven manner. However, existing neural radiance field (NeRF) methods for sensor-realistic rendering of camera and lidar data suffer from low rendering speeds, limiting their applicability for large-scale testing. While 3D Gaussian Splatting (3DGS) enables real-time rendering, current methods are limited to camera data and are unable to render lidar data essential for autonomous driving. To address these limitations, we propose SplatAD, the first 3DGS-based method for realistic, real-time rendering of dynamic scenes for both camera and lidar data. SplatAD accurately models key sensor-specific phenomena such as rolling shutter effects, lidar intensity, and lidar ray dropouts, using purpose-built algorithms to optimize rendering efficiency. Evaluation across three autonomous driving datasets demonstrates that SplatAD achieves state-of-the-art rendering quality with up to +2 PSNR for NVS and +3 PSNR for reconstruction while increasing rendering speed over NeRF-based methods by an order of magnitude. See https://research.zenseact.com/publications/splatad/ for our project page.
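The abstract names rolling-shutter effects as one of the sensor-specific phenomena SplatAD models. As a rough illustration only (not the paper's implementation), the core idea can be sketched as shifting each Gaussian's center to the capture time of the image row it projects to, assuming a per-Gaussian linear velocity and a fixed per-row readout delay; the function and parameter names here are hypothetical:

```python
import numpy as np

def rolling_shutter_means(means, velocities, rows, row_readout_time):
    """Compensate Gaussian centers for rolling-shutter readout.

    means            : (N, 3) Gaussian centers at the start of readout
    velocities       : (N, 3) per-Gaussian linear velocities (m/s)
    rows             : (N,)   image row each Gaussian projects to
    row_readout_time : seconds between readout of consecutive rows
    """
    t = rows * row_readout_time             # per-Gaussian capture-time offset
    return means + velocities * t[:, None]  # first-order motion compensation

# Usage: a Gaussian moving at 10 m/s along x, projected to row 500,
# with a 20 microsecond per-row readout delay -> 0.1 m shift along x.
means = np.zeros((1, 3))
vel = np.array([[10.0, 0.0, 0.0]])
shifted = rolling_shutter_means(means, vel, np.array([500]), 20e-6)
```

In the full method the compensated centers would then be projected and rasterized as usual; this sketch only shows the time-offset idea.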
Problem

Research questions and friction points this paper is trying to address.

NeRF-based sensor-realistic rendering is too slow for large-scale autonomous driving testing
Existing 3DGS methods render only camera data, not the lidar data autonomous driving requires
Achieving real-time rendering speed without sacrificing quality relative to NeRF methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

First 3DGS-based method for real-time rendering of both camera and lidar data
3D Gaussian Splatting extended to dynamic driving scenes
Purpose-built algorithms for sensor-specific phenomena (rolling shutter, lidar intensity, ray dropouts)
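The lidar ray-dropout effect listed above can be illustrated with a toy sketch (an assumed formulation, not the paper's actual algorithm): each ray is kept or dropped by a Bernoulli draw on a predicted per-ray drop probability, and dropped rays return no intensity:

```python
import numpy as np

def apply_ray_dropout(intensities, drop_prob, rng):
    """Zero out lidar returns according to a per-ray drop probability.

    intensities : (N,) rendered lidar intensities
    drop_prob   : (N,) predicted probability that each ray returns nothing
    rng         : NumPy Generator supplying the Bernoulli draws
    """
    kept = rng.random(drop_prob.shape) >= drop_prob  # Bernoulli keep mask
    return np.where(kept, intensities, 0.0), kept

# Usage: rays with drop probability 0 are always kept, probability 1 always dropped.
rng = np.random.default_rng(0)
out, kept = apply_ray_dropout(np.ones(4), np.array([0.0, 0.0, 1.0, 1.0]), rng)
```

In practice such probabilities would be predicted per ray during rasterization and supervised against observed missing returns; the sampling step itself is this simple.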