LightLoc: Learning Outdoor LiDAR Localization at Light Speed

📅 2025-03-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing outdoor LiDAR localization methods rely on scene coordinate regression, which achieves high accuracy but requires days of training and adapts slowly to new scenes, failing to meet the turnaround needs of autonomous driving and UAV applications. This paper proposes LightLoc, a framework comprising two techniques: sample classification guidance, which improves feature discriminability via joint classification-regression learning, and redundant sample downsampling, which uses classification confidence to remove well-learned frames during training and shrink the training set. The approach achieves, for the first time, minute-scale model adaptation to unseen scenes, accelerating training by 50×, and its confidence estimates are integrated into a SLAM pipeline to suppress pose drift accumulation. Evaluated on large-scale outdoor benchmarks, it attains state-of-the-art accuracy and enables real-time deployment on edge devices.

📝 Abstract
Scene coordinate regression achieves impressive results in outdoor LiDAR localization but requires days of training. Since training needs to be repeated for each new scene, long training times make these methods impractical for time-sensitive applications, such as autonomous driving, drones, and robotics. We identify large coverage areas and vast data in large-scale outdoor scenes as key challenges that limit fast training. In this paper, we propose LightLoc, the first method capable of efficiently learning localization in a new scene at light speed. LightLoc introduces two novel techniques to address these challenges. First, we introduce sample classification guidance to assist regression learning, reducing ambiguity from similar samples and improving training efficiency. Second, we propose redundant sample downsampling to remove well-learned frames during training, reducing training time without compromising accuracy. Additionally, the fast training and confidence estimation capabilities of sample classification enable its integration into SLAM, effectively eliminating error accumulation. Extensive experiments on large-scale outdoor datasets demonstrate that LightLoc achieves state-of-the-art performance with a 50x reduction in training time compared to existing methods. Our code is available at https://github.com/liw95/LightLoc.
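To make the joint classification-regression idea concrete, here is a minimal NumPy sketch, not the authors' implementation: it assumes the scene is partitioned into discrete spatial regions, and a cross-entropy term over region labels is added to the scene-coordinate regression loss. The function names, the region partition, and the weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(pred_coords, gt_coords, region_logits, region_labels, alpha=0.1):
    """Scene-coordinate regression loss plus a classification term that
    encourages features to separate the spatial regions of the scene.

    pred_coords, gt_coords: (N, 3) predicted / ground-truth coordinates.
    region_logits: (N, K) class scores over K spatial regions.
    region_labels: (N,) ground-truth region index per sample.
    """
    # mean Euclidean error on the regressed coordinates
    reg = np.mean(np.linalg.norm(pred_coords - gt_coords, axis=1))
    # cross-entropy over the region labels guides the shared features
    probs = softmax(region_logits)
    n = region_labels.shape[0]
    ce = -np.mean(np.log(probs[np.arange(n), region_labels] + 1e-12))
    return reg + alpha * ce
```

The confident per-sample region probabilities produced by such a head are also what a confidence-based frame-selection step could read off during training.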
Problem

Research questions and friction points this paper is trying to address.

Reducing long training time for outdoor LiDAR localization
Addressing challenges in large-scale outdoor scene coverage
Improving training efficiency without compromising localization accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sample classification guides regression learning
Redundant sample downsampling cuts training time
Fast training integrates into SLAM systems
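As a rough sketch of how redundant sample downsampling could work, the snippet below, an assumption rather than the authors' code, drops frames the model already classifies with high confidence and keeps a small random fraction of them to avoid forgetting. The threshold, the retained fraction, and the function name are hypothetical.

```python
import numpy as np

def downsample_frames(frame_ids, confidences, threshold=0.95, keep_frac=0.1):
    """Remove well-learned frames from the training set.

    frame_ids: (N,) identifiers of training frames.
    confidences: (N,) per-frame classification confidence in [0, 1].
    Frames at or above `threshold` are considered well learned; only a
    `keep_frac` fraction of them is retained, sampled at random.
    """
    frame_ids = np.asarray(frame_ids)
    confidences = np.asarray(confidences)
    well_learned = confidences >= threshold
    keep = ~well_learned  # always keep frames the model is unsure about
    idx = np.flatnonzero(well_learned)
    n_keep = int(np.ceil(keep_frac * idx.size)) if idx.size else 0
    if n_keep:
        rng = np.random.default_rng(0)
        keep[rng.choice(idx, size=n_keep, replace=False)] = True
    return frame_ids[keep]
```

Under these assumptions, each training epoch would re-run the selection so the set shrinks as more frames become well learned, which is one plausible way the reported training-time reduction could be realized.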
Wen Li
Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Chen Liu
Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Shangshu Yu
Nanyang Technological University
3D Computer Vision, LiDAR Localization, Depth Estimation, Pose Estimation
Dunqiang Liu
Xiamen University
LiDAR Localization, Multi-modal Learning
Yin Zhou
GAC R&D Center
Siqi Shen
Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Chenglu Wen
Professor, Xiamen University
3D vision, point clouds, mobile mapping, robotics
Cheng Wang
Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University