RLPR: Radar-to-LiDAR Place Recognition via Two-Stage Asymmetric Cross-Modal Alignment for Autonomous Driving

πŸ“… 2026-03-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the significant degradation of LiDAR-based localization under adverse weather conditions and the lack of readily available radar maps despite radar’s robustness. To bridge this gap, the authors propose the RLPR framework, which is, to their knowledge, the first to introduce a task-driven, asymmetric cross-modal alignment mechanism between radar and LiDAR. Leveraging a dual-stream network, the method extracts sensor-agnostic structural features and employs a two-stage asymmetric cross-modal alignment strategy (TACMA), in which a pre-trained radar branch guides the alignment of LiDAR features, enabling reliable place recognition against LiDAR maps from radar inputs. The approach supports single-chip, scanning, and 4D radars, achieving state-of-the-art accuracy across four datasets and demonstrating strong zero-shot generalization across diverse radar types and environments.

πŸ“ Abstract
All-weather autonomy is critical for autonomous driving, which necessitates reliable localization across diverse scenarios. While LiDAR place recognition is widely deployed for this task, its performance degrades in adverse weather. Conversely, radar-based methods, though weather-resilient, are hindered by the general unavailability of radar maps. To bridge this gap, radar-to-LiDAR place recognition, which localizes radar scans within existing LiDAR maps, has garnered increasing interest. However, extracting discriminative and generalizable features shared between modalities remains challenging, compounded by the scarcity of large-scale paired training data and the signal heterogeneity across radar types. In this work, we propose RLPR, a robust radar-to-LiDAR place recognition framework compatible with single-chip, scanning, and 4D radars. We first design a dual-stream network to extract structural features that abstract away from sensor-specific signal properties (e.g., Doppler or RCS). Subsequently, motivated by our task-specific asymmetry observation between radar and LiDAR, we introduce a two-stage asymmetric cross-modal alignment (TACMA) strategy, which leverages the pre-trained radar branch as a discriminative anchor to guide the alignment process. Experiments on four datasets demonstrate that RLPR achieves state-of-the-art recognition accuracy with strong zero-shot generalization capabilities.
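To make the asymmetry concrete, here is a minimal toy sketch (not the paper's code) of the idea that a frozen, pre-trained radar embedding serves as the discriminative anchor while only the LiDAR-side projection is optimized toward it. All names, dimensions, and the finite-difference optimizer are illustrative assumptions, not details from RLPR.

```python
import numpy as np

# Toy sketch of asymmetric alignment: the radar anchor is frozen,
# and only the LiDAR projection W is trained to match it.
rng = np.random.default_rng(0)
D = 8                                     # embedding dimension (toy)
radar_feat = rng.normal(size=(4, D))      # frozen radar-branch outputs (anchor)
lidar_feat = rng.normal(size=(4, D))      # LiDAR-branch features to be aligned
W = rng.normal(size=(D, D)) * 0.1         # trainable LiDAR projection (hypothetical)

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

anchor = normalize(radar_feat)            # never updated: the "discriminative anchor"

def loss(W):
    # mean squared distance between matched LiDAR and radar embeddings
    z = normalize(lidar_feat @ W)
    return float(np.mean(np.sum((z - anchor) ** 2, axis=1)))

before = loss(W)
cur = before
eps = 1e-4
for _ in range(100):
    # finite-difference gradient of the alignment loss w.r.t. W only
    grad = np.zeros_like(W)
    base = loss(W)
    for i in range(D):
        for j in range(D):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (loss(Wp) - base) / eps
    # backtracking step: accept the update only if the loss decreases
    step = 0.5
    while step > 1e-8:
        W_try = W - step * grad
        if loss(W_try) < cur:
            W, cur = W_try, loss(W_try)
            break
        step *= 0.5

after = cur
print(after < before)  # LiDAR features moved toward the fixed radar anchor
```

The one-way gradient flow (LiDAR toward radar, never the reverse) is the core of the asymmetry the abstract describes; in the actual framework this would be realized with a deep dual-stream network rather than a single linear projection.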
Problem

Research questions and friction points this paper is trying to address:

- radar-to-LiDAR place recognition
- cross-modal alignment
- autonomous driving
- all-weather localization
- signal heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out:

- radar-to-LiDAR place recognition
- asymmetric cross-modal alignment
- structural feature extraction
- zero-shot generalization
- dual-stream network
Authors

Zhangshuo Qi
Beijing Institute of Technology
Robotics, Intelligent Vehicles, Place Recognition

Jingyi Xu
Shanghai Jiao Tong University

Luqi Cheng
Beijing Institute of Technology

Shichen Wen
Beijing Institute of Technology

Guangming Xiong
Beijing Institute of Technology