🤖 AI Summary
This work addresses key challenges in LiDAR-based novel view synthesis: existing methods rely heavily on accurate poses, and the sparsity and textureless nature of LiDAR point clouds often lead to geometric holes and discontinuous surfaces. To overcome these limitations, the authors propose SG-NLF, a framework that, for the first time, introduces spectral priors into a pose-free LiDAR NeRF. SG-NLF models scenes through a hybrid spectral-geometric representation and builds a confidence-aware graph from feature compatibility to achieve global pose self-alignment. Additionally, adversarial learning is incorporated to enhance cross-frame consistency. Experimental results show that SG-NLF outperforms prior state-of-the-art methods by over 35.8% in reconstruction quality and 68.8% in pose accuracy, with particularly pronounced gains in challenging low-frequency scenes.
📝 Abstract
Neural Radiance Fields (NeRF) have shown remarkable success in image novel view synthesis (NVS), inspiring extensions to LiDAR NVS. However, most methods rely heavily on accurate sensor poses for scene reconstruction. The sparsity and textureless nature of LiDAR data pose further challenges, leading to geometric holes and discontinuous surfaces. To address these issues, we propose SG-NLF, a pose-free LiDAR NeRF framework that integrates spectral information with geometric consistency. Specifically, we design a hybrid representation based on spectral priors to reconstruct smooth geometry. For pose optimization, we construct a confidence-aware graph based on feature compatibility to achieve global alignment. In addition, an adversarial learning strategy is introduced to enforce cross-frame consistency, further enhancing reconstruction quality. Comprehensive experiments demonstrate the effectiveness of our framework, especially in challenging low-frequency scenarios. Compared to previous state-of-the-art methods, SG-NLF improves reconstruction quality and pose accuracy by over 35.8% and 68.8%, respectively. Our work provides a novel perspective for LiDAR view synthesis.
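To give a rough sense of the global-alignment idea, the sketch below solves a toy confidence-weighted pose graph: frames are nodes, edges carry relative translation estimates with confidence weights, and global translations are recovered by weighted least squares. This is only a minimal illustration under assumed simplifications (translations only, hand-picked weights); the paper's actual confidence-aware graph, feature-compatibility scoring, and full pose optimization are not specified here, and all names are hypothetical.

```python
# Hypothetical sketch: confidence-weighted global alignment on a pose graph.
# Not the paper's method; a toy weighted-least-squares solve for translations.
import numpy as np

def align_global_translations(n_frames, edges):
    """edges: list of (i, j, t_ij, w) where t_ij estimates t_j - t_i
    (3-vector) and w is a confidence weight (e.g. from feature
    compatibility). Frame 0 is anchored at the origin.
    Returns an (n_frames, 3) array of global translations."""
    A, b = [], []
    for i, j, t_ij, w in edges:
        for k in range(3):
            # residual sqrt(w) * (t_j[k] - t_i[k] - t_ij[k])
            row = np.zeros(3 * n_frames)
            row[3 * j + k] = 1.0
            row[3 * i + k] = -1.0
            A.append(np.sqrt(w) * row)
            b.append(np.sqrt(w) * t_ij[k])
    # strong prior pinning frame 0 to the origin (removes gauge freedom)
    for k in range(3):
        row = np.zeros(3 * n_frames)
        row[k] = 1e6
        A.append(row)
        b.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol.reshape(n_frames, 3)

# Three frames on a line; the long-range edge is noisy but down-weighted,
# so the confident short-range edges dominate the global solution.
edges = [
    (0, 1, np.array([1.0, 0.0, 0.0]), 1.0),
    (1, 2, np.array([1.0, 0.0, 0.0]), 1.0),
    (0, 2, np.array([2.5, 0.0, 0.0]), 0.01),  # low confidence
]
T = align_global_translations(3, edges)
```

Because the disagreeing edge carries a small weight, the recovered translations stay close to the consistent chain (roughly 0, 1, and 2 along x) rather than being dragged toward the noisy long-range estimate.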