UniLGL: Learning Uniform Place Recognition for FOV-limited/Panoramic LiDAR Global Localization

📅 2025-07-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing LiDAR-based global localization (LGL) methods rely heavily on geometric features or assume homogeneous sensors, and thus cannot simultaneously ensure consistency across spatial layout, material properties, and sensor modalities. Method: UniLGL is the first LGL framework to replace translation equivariance with viewpoint invariance, enabling unified descriptor learning across heterogeneous LiDARs (e.g., narrow-FOV vs. panoramic). It introduces a multi-BEV fusion network that encodes point clouds into spatial-intensity dual-channel bird's-eye-view (BEV) images for robust end-to-end feature extraction, and a registration-free SE(3) pose estimator that leverages the mapping between 2D BEV features and 3D points to solve the global pose. Contribution/Results: UniLGL achieves state-of-the-art performance on real-world port and forest benchmarks. It has been deployed on full-size trucks and micro aerial vehicles, enabling high-precision localization, mapping, and multi-robot collaborative exploration.

๐Ÿ“ Abstract
Existing LGL methods typically consider only partial information (e.g., geometric features) from LiDAR observations or are designed for homogeneous LiDAR sensors, overlooking the uniformity in LGL. In this work, a uniform LGL method is proposed, termed UniLGL, which simultaneously achieves spatial and material uniformity, as well as sensor-type uniformity. The key idea of the proposed method is to encode the complete point cloud, which contains both geometric and material information, into a pair of BEV images (i.e., a spatial BEV image and an intensity BEV image). An end-to-end multi-BEV fusion network is designed to extract uniform features, equipping UniLGL with spatial and material uniformity. To ensure robust LGL across heterogeneous LiDAR sensors, a viewpoint invariance hypothesis is introduced, which replaces the conventional translation equivariance assumption commonly used in existing LPR networks and supervises UniLGL to achieve sensor-type uniformity in both global descriptors and local feature representations. Finally, based on the mapping between local features on the 2D BEV image and the point cloud, a robust global pose estimator is derived that determines the global minimum of the global pose on SE(3) without requiring additional registration. To validate the effectiveness of the proposed uniform LGL, extensive benchmarks are conducted in real-world environments, and the results show that the proposed UniLGL is demonstratively competitive compared to other State-of-the-Art LGL methods. Furthermore, UniLGL has been deployed on diverse platforms, including full-size trucks and agile Micro Aerial Vehicles (MAVs), to enable high-precision localization and mapping as well as multi-MAV collaborative exploration in port and forest environments, demonstrating the applicability of UniLGL in industrial and field scenarios.
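The dual-channel BEV encoding described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the grid ranges, resolution, and max-pooling aggregation are assumptions, and the actual UniLGL pipeline feeds these images into its multi-BEV fusion network.

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), res=0.5):
    """Project an (N, 4) point cloud [x, y, z, intensity] onto a pair of
    BEV images: a spatial channel (max height per cell) and an intensity
    channel (max reflectivity per cell). Grid ranges and resolution are
    assumed values for illustration; empty cells default to 0."""
    H = int((x_range[1] - x_range[0]) / res)
    W = int((y_range[1] - y_range[0]) / res)
    spatial = np.zeros((H, W), dtype=np.float32)
    intensity = np.zeros((H, W), dtype=np.float32)

    # Keep only points inside the BEV window.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Discretize x/y coordinates into grid indices.
    ix = ((pts[:, 0] - x_range[0]) / res).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(np.int64)

    # Unbuffered max-pooling of height and intensity per cell.
    np.maximum.at(spatial, (ix, iy), pts[:, 2])
    np.maximum.at(intensity, (ix, iy), pts[:, 3])
    return spatial, intensity
```

Because both channels share the same grid, a pixel in either BEV image maps back to a column of 3D points, which is the 2D-3D correspondence the registration-free pose estimator relies on.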
Problem

Research questions and friction points this paper is trying to address.

How to achieve uniform LiDAR global localization across heterogeneous sensor types
How to encode the complete point cloud (geometry and intensity) into BEV images for feature extraction
How to estimate a robust global pose without an additional registration step
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encodes point cloud into spatial and intensity BEV images
Uses multi-BEV fusion for uniform feature extraction
Introduces viewpoint invariance for sensor-type uniformity
Hongming Shen, Nanyang Technological University (SLAM, Sensor Fusion, Aerial Robotics)
Xun Chen, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
Yulin Hui, School of Electrical and Information Engineering, Tianjin University, Tianjin, China 300072
Zhenyu Wu, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
Wei Wang, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
Qiyang Lyu, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
Tianchen Deng, Shanghai Jiao Tong University (Robotics, Computer Vision)
Danwei Wang, Professor, Nanyang Technological University (Robotics, Control Engineering, Fault Diagnosis)