🤖 AI Summary
This work addresses the challenge of real-time, high-fidelity LiDAR scan re-rendering in large-scale urban environments. Methodologically, it introduces the first differentiable LiDAR-specific re-simulation framework, featuring a novel differentiable ray-casting model and range-image projection mechanism that eliminates geometric artifacts inherent in conventional local affine approximations. It further proposes a neural Gaussian representation to jointly model incidence-angle- and scene-dependent intensity attenuation and beam dropouts, integrated with dynamic instance decomposition. Built upon the Gaussian Splatting architecture, the framework simultaneously synthesizes depth, intensity, and beam-dropout maps. Evaluated on public large-scale urban datasets, our method achieves state-of-the-art rendering quality at significantly higher frame rates, outperforming both explicit mesh-based and implicit NeRF-based baselines across all metrics.
📝 Abstract
We present LiDAR-GS, a Gaussian Splatting (GS) method for real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. Recent GS methods proposed for cameras have achieved significant advances in real-time rendering beyond Neural Radiance Fields (NeRF). However, applying the GS representation to LiDAR, an active 3D sensor, poses several challenges that must be addressed to preserve its high accuracy and unique characteristics. Specifically, LiDAR-GS introduces differentiable laser beam splatting, which uses a range-view representation for precise surface splatting by projecting lasers onto micro cross-sections, effectively eliminating the artifacts associated with local affine approximations. Furthermore, LiDAR-GS leverages a Neural Gaussian Representation, which additionally integrates view-dependent cues, to model key LiDAR properties influenced by the incident direction and external factors. Combining these practices with essential adaptations such as dynamic instance decomposition, LiDAR-GS succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large-scale scene datasets when compared with methods based on explicit meshes or implicit NeRF. Our source code is publicly available at https://www.github.com/cqf7419/LiDAR-GS.
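The range-view representation mentioned above builds on the standard spherical projection that maps each LiDAR return to a pixel of a range image. As a minimal sketch of that underlying projection (not the paper's differentiable splatting; the sensor parameters here are hypothetical, e.g. a 32-beam LiDAR with a -25° to +15° vertical field of view):

```python
import numpy as np

# Hypothetical sensor parameters (illustrative only, not from the paper):
# 32 vertical beams, 1024 horizontal bins, vertical FOV from -25 to +15 deg.
H, W = 32, 1024
FOV_UP, FOV_DOWN = np.radians(15.0), np.radians(-25.0)

def points_to_range_image(points, intensity):
    """Project 3D points (N, 3) with intensities (N,) to H x W range and
    intensity images via spherical (range-view) projection."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)             # per-point range
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))     # elevation angle

    # Map angles to integer pixel coordinates.
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * W).astype(int) % W
    v = np.clip(((FOV_UP - pitch) / (FOV_UP - FOV_DOWN) * H).astype(int),
                0, H - 1)

    range_img = np.full((H, W), -1.0)  # -1 marks empty pixels (ray drop)
    inten_img = np.full((H, W), -1.0)

    # Write points sorted by descending range so nearer returns win.
    order = np.argsort(-r)
    range_img[v[order], u[order]] = r[order]
    inten_img[v[order], u[order]] = intensity[order]
    return range_img, inten_img
```

Re-simulation then amounts to rendering these channels (plus a ray-drop mask) from novel sensor poses; LiDAR-GS replaces the hard pixel assignment above with a differentiable splatting of Gaussians onto the range view.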