LR-SGS: Robust LiDAR-Reflectance-Guided Salient Gaussian Splatting for Self-Driving Scene Reconstruction

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing self-supervised scene reconstruction methods, which struggle to effectively integrate geometric and reflectance information from LiDAR point clouds with RGB images under high ego-motion and complex lighting. The authors propose a structure-aware, saliency-guided Gaussian representation that, for the first time, incorporates calibrated LiDAR reflectance as an illumination-invariant material channel within the 3D Gaussian Splatting framework. Saliency-guided Gaussian initialization and density control improve the reconstruction of edges and planar structures while aligning boundaries across the LiDAR and RGB modalities. Evaluated on the Waymo Open Dataset, the approach achieves superior reconstruction quality with fewer Gaussians and shorter training time, including a 1.18 dB PSNR improvement over OmniRe under challenging lighting conditions.

📝 Abstract
Recent 3D Gaussian Splatting (3DGS) methods have demonstrated the feasibility of self-driving scene reconstruction and novel view synthesis. However, most existing methods either rely solely on cameras or use LiDAR only for Gaussian initialization or depth supervision; the rich scene information contained in point clouds, such as reflectance, and the complementarity between LiDAR and RGB remain underexploited, leading to degradation in challenging self-driving scenes such as those with high ego-motion and complex lighting. To address these issues, we propose LR-SGS, a robust and efficient LiDAR-Reflectance-guided Salient Gaussian Splatting method for self-driving scenes. LR-SGS introduces a structure-aware Salient Gaussian representation, initialized from geometric and reflectance feature points extracted from LiDAR and refined through a salient transform and improved density control to capture edge and planar structures. Furthermore, we calibrate LiDAR intensity into reflectance and attach it to each Gaussian as a lighting-invariant material channel, jointly aligned with RGB to enforce boundary consistency. Extensive experiments on the Waymo Open Dataset demonstrate that LR-SGS achieves superior reconstruction performance with fewer Gaussians and shorter training time. In particular, on Complex Lighting scenes, our method surpasses OmniRe by 1.18 dB PSNR.
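The abstract's step of calibrating raw LiDAR intensity into an illumination-invariant reflectance channel can be sketched in a minimal form. The paper's actual calibration model is not given here, so the inverse-square range compensation, the Lambertian incidence term, and all names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def calibrate_reflectance(intensity, ranges, cos_incidence, r_ref=10.0, eps=1e-6):
    """Compensate raw LiDAR intensity for range falloff and incidence angle,
    yielding a per-point value in [0, 1] that approximates surface reflectance.

    All modeling choices here (inverse-square falloff, Lambertian incidence,
    reference range r_ref) are hypothetical, for illustration only.
    """
    # Inverse-square range compensation: normalize returns to a reference range,
    # since received intensity falls off with distance to the surface.
    compensated = intensity * (ranges / r_ref) ** 2
    # Incidence compensation: grazing hits return less energy (Lambertian assumption).
    compensated = compensated / np.clip(cos_incidence, eps, 1.0)
    # Rescale to [0, 1] so the value can be attached to each Gaussian
    # as an extra lighting-invariant material channel alongside RGB.
    return np.clip(compensated / max(compensated.max(), eps), 0.0, 1.0)
```

In a 3DGS pipeline, such a value would be stored per Gaussian and rasterized like any other channel, letting a reflectance-rendering loss supervise boundaries independently of scene illumination.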
Problem

Research questions and friction points this paper is trying to address.

3D Gaussian Splatting
LiDAR reflectance
self-driving scene reconstruction
complex lighting
multi-sensor fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

LiDAR reflectance
3D Gaussian Splatting
salient representation
multi-modal alignment
self-driving scene reconstruction
Ziyu Chen
Chongqing University
DCOPs, MAS
Fan Zhu
Bayanat
3D computer vision, deep learning
Hui Zhu
Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
Deyi Kong
Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
Xinkai Kuang
Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China; University of Science and Technology of China, Hefei, China
Yujia Zhang
Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China; University of Science and Technology of China, Hefei, China
Chunmao Jiang
Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China