LiDAR-GS++: Improving LiDAR Gaussian Reconstruction via Diffusion Priors

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe geometric inconsistency and artifacts in Gaussian Splatting (GS) reconstruction from single LiDAR sweeps during novel-view extrapolation, this paper proposes a diffusion-prior-augmented reconstruction framework. The method first employs a conditional diffusion model, guided by coarse extrapolated renderings, to generate geometrically consistent completed point clouds. Second, it introduces a knowledge distillation mechanism that transfers the learned diffusion prior into the GS parameters, thereby extending the effective reconstruction range. Crucially, the approach preserves real-time rendering capability while significantly improving global geometric consistency and detail fidelity under extrapolated viewpoints. Extensive experiments on multiple urban road LiDAR datasets demonstrate state-of-the-art performance for both interpolation and extrapolation tasks, consistently outperforming existing GS- and NeRF-based baselines.

📝 Abstract
Recent GS-based rendering has made significant progress for LiDAR, surpassing Neural Radiance Fields (NeRF) in both quality and speed. However, these methods exhibit artifacts in extrapolated novel view synthesis due to the incomplete reconstruction from single traversal scans. To address this limitation, we present LiDAR-GS++, a LiDAR Gaussian Splatting reconstruction method enhanced by diffusion priors for real-time and high-fidelity re-simulation on public urban roads. Specifically, we introduce a controllable LiDAR generation model conditioned on coarsely extrapolated rendering to produce extra geometry-consistent scans and employ an effective distillation mechanism for expansive reconstruction. By extending reconstruction to under-fitted regions, our approach ensures global geometric consistency for extrapolative novel views while preserving detailed scene surfaces captured by sensors. Experiments on multiple public datasets demonstrate that LiDAR-GS++ achieves state-of-the-art performance for both interpolated and extrapolated viewpoints, surpassing existing GS and NeRF-based methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses artifacts in LiDAR reconstruction from incomplete single scans
Improves novel view synthesis via diffusion priors for geometric consistency
Enhances real-time reconstruction quality for urban road environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion priors enhance Gaussian Splatting reconstruction
Generates geometry-consistent scans via controllable LiDAR model
Employs distillation for expansive reconstruction of under-fitted regions
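The pipeline these bullets describe can be sketched in heavily simplified toy form. Everything below is an illustrative stand-in, not the paper's implementation: the 1-D "range image", the `diffusion_complete` denoiser, and the `distill` loop are hypothetical minimal analogues of the real conditional LiDAR diffusion model and GS distillation.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_range_image(points, n_bins=32):
    """Project 2-D points to a 1-D 'range image' by angular binning
    (toy analogue of rendering a LiDAR sweep from a GS reconstruction)."""
    angles = np.arctan2(points[:, 1], points[:, 0])
    ranges = np.linalg.norm(points, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    img = np.zeros(n_bins)
    for b, r in zip(bins, ranges):
        img[b] = max(img[b], r)  # keep the farthest return per bin (toy choice)
    return img

def diffusion_complete(coarse, steps=20):
    """Stand-in for the conditional generation model: start from noise and
    iteratively denoise toward the coarse rendering, filling empty bins
    with the mean of the observed ranges."""
    x = rng.normal(size=coarse.shape)
    target = np.where(coarse > 0, coarse, coarse[coarse > 0].mean())
    for _ in range(steps):
        x = x + 0.3 * (target - x)  # crude score-like step toward the condition
    return x

def distill(gs_ranges, pseudo, lr=0.5, iters=50):
    """Stand-in for distillation: fit the GS 'parameters' (here, per-bin
    ranges) to the diffusion-completed pseudo-scan via gradient steps
    on a mean-squared error."""
    params = gs_ranges.copy()
    for _ in range(iters):
        params -= lr * 2 * (params - pseudo) / len(params)
    return params

# A single-sweep reconstruction leaves some regions unobserved (under-fitted).
pts = rng.normal(scale=5.0, size=(40, 2))
coarse = render_range_image(pts)
coarse[::4] = 0.0                      # simulate unobserved regions
pseudo = diffusion_complete(coarse)    # geometry-consistent completion
refined = distill(coarse, pseudo)      # distill the prior into the parameters

print("empty bins before:", int((coarse == 0).sum()))
print("empty bins after :", int((refined < 1e-3).sum()))
```

The design point the sketch tries to convey is the division of labor: the generative model only supplies extra supervision in under-fitted regions, while the explicit GS representation is what gets optimized, so real-time rendering is untouched.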
Authors
Qifeng Chen, HKUST (Computational Photography, Image Synthesis, Generative AI, Autonomous Driving, Embodied AI)
Jiarun Liu, Unmanned Vehicle Dept., CaiNiao Inc., Alibaba Group
Rengan Xie, State Key Laboratory of CAD&CG, Zhejiang University
Tao Tang, Sun Yat-sen University
Sicong Du, Unmanned Vehicle Dept., CaiNiao Inc., Alibaba Group
Yiru Zhao, Alibaba DAMO Academy (Computer Vision)
Yuchi Huo, State Key Laboratory of CAD&CG, Zhejiang University
Sheng Yang, Unmanned Vehicle Dept., CaiNiao Inc., Alibaba Group