OPAL: Visibility-aware LiDAR-to-OpenStreetMap Place Recognition via Adaptive Radial Fusion

📅 2025-04-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
LiDAR-based place recognition in large-scale outdoor environments typically relies on expensive pre-built 3D maps or aerial imagery, which incurs high computational cost and limits real-time adaptability. This paper proposes a lightweight, efficient alternative that uses dynamically updated OpenStreetMap (OSM) data as a structured geometric prior, eliminating the need for dense 3D reconstruction and remote-sensing imagery. The core contributions are (1) a visibility-aware cross-modal masking mechanism and (2) an adaptive radial feature fusion strategy, which jointly bridge the representation gap between sparse LiDAR point clouds and vectorized OSM data. Multi-scale radial feature extraction is further integrated with end-to-end contrastive learning. Evaluated on augmented KITTI and KITTI-360 benchmarks, the method achieves a 15.98% improvement in top-1 recall at a 1-meter threshold and runs at 12× the inference speed of current state-of-the-art methods.
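To make the "radial feature" idea concrete: one common way to encode a LiDAR scan for map-based matching is a polar bird's-eye-view occupancy grid (rings by radius, sectors by azimuth). The sketch below is a simplified illustration under that assumption, not the paper's actual network; the function name and bin counts are hypothetical.

```python
import numpy as np

def radial_descriptor(points, num_rings=4, num_sectors=16, max_range=50.0):
    """Bin a sensor-centered LiDAR scan (N x 3) into a polar BEV
    occupancy grid: rings by radius, sectors by azimuth.
    Hypothetical illustration of radial binning, not OPAL's model."""
    xy = points[:, :2]
    r = np.linalg.norm(xy, axis=1)
    theta = np.arctan2(xy[:, 1], xy[:, 0])          # azimuth in [-pi, pi]
    keep = r < max_range                             # drop far returns
    ring = (r[keep] / max_range * num_rings).astype(int)
    sector = ((theta[keep] + np.pi) / (2 * np.pi)
              * num_sectors).astype(int) % num_sectors
    grid = np.zeros((num_rings, num_sectors))
    np.add.at(grid, (ring, sector), 1.0)             # accumulate point counts
    # L2-normalize so descriptors are comparable across scan densities
    return grid / (np.linalg.norm(grid) + 1e-8)
```

The same polar partitioning can be rasterized from vectorized OSM geometry, which is what makes a shared radial representation a natural bridge between the two modalities.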

📝 Abstract
LiDAR place recognition is a critical capability for autonomous navigation and cross-modal localization in large-scale outdoor environments. Existing approaches predominantly depend on pre-built 3D dense maps or aerial imagery, which impose significant storage overhead and lack real-time adaptability. In this paper, we propose OPAL, a novel network for LiDAR place recognition that leverages OpenStreetMap as a lightweight and up-to-date prior. Our key innovation lies in bridging the domain disparity between sparse LiDAR scans and structured OSM data through two carefully designed components: a cross-modal visibility mask that identifies maximal observable regions from both modalities to guide feature learning, and an adaptive radial fusion module that dynamically consolidates multiscale radial features into discriminative global descriptors. Extensive experiments on the augmented KITTI and KITTI-360 datasets demonstrate OPAL's superiority, achieving 15.98% higher top-1 recall at the 1 m threshold while operating at 12× faster inference speeds compared to state-of-the-art approaches. Code and datasets are publicly available at: https://github.com/WHU-USI3DV/OPAL .
Problem

Research questions and friction points this paper is trying to address.

Bridges LiDAR and OpenStreetMap for place recognition
Reduces storage needs and improves real-time adaptability
Enhances accuracy and speed in cross-modal localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses OpenStreetMap as lightweight prior
Cross-modal visibility mask guides learning
Adaptive radial fusion for global descriptors
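The "adaptive" part of radial fusion can be read as a learned gating over per-scale features: each radial scale contributes to the global descriptor in proportion to a data-dependent weight rather than a fixed average. The snippet below is a minimal sketch of that pattern (softmax-gated fusion); `gate_w` and the function name are hypothetical, and the paper's module is a trained network, not this closed form.

```python
import numpy as np

def adaptive_fuse(features, gate_w):
    """Fuse S per-scale radial feature vectors (S x D) into one global
    descriptor via softmax gates. Hypothetical sketch, not OPAL's module."""
    scores = features @ gate_w                 # (S,) relevance per scale
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    fused = weights @ features                 # (D,) weighted combination
    # L2-normalize so retrieval can use cosine / dot-product similarity
    return fused / (np.linalg.norm(fused) + 1e-8)
```

In a contrastive training setup, descriptors produced this way for a LiDAR scan and its corresponding OSM tile would be pulled together, while mismatched pairs are pushed apart.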