Learning Point Correspondences In Radar 3D Point Clouds For Radar-Inertial Odometry

📅 2025-06-23
🤖 AI Summary
Existing odometry methods struggle to robustly establish inter-frame point correspondences for low-cost FMCW radar–generated 3D point clouds, which are typically noisy, sparse, and unstructured. To address this, we propose a self-supervised Transformer-based framework that leverages attention mechanisms to model long-range pairwise relationships among points. Instead of relying on manual annotations or indirect odometry supervision, we introduce a set-based multi-label classification loss that directly optimizes correspondence prediction. Point matches are solved via linear assignment, and inertial measurements from an IMU are tightly fused to enable radar-inertial odometry estimation. Evaluated on real-world UAV flight data and the Coloradar dataset, our method achieves 14% and 19% improvements in position estimation accuracy, respectively, demonstrating significantly enhanced robustness and precision for pose estimation under low-quality radar point cloud conditions.

📝 Abstract
Using 3D point clouds for odometry estimation in robotics often requires finding a set of correspondences between points in subsequent scans. While there are established methods for point clouds of sufficient quality, state-of-the-art methods still struggle when this quality drops. Thus, this paper presents a novel learning-based framework for predicting robust point correspondences between pairs of noisy, sparse, and unstructured 3D point clouds from a lightweight, low-power, inexpensive, consumer-grade System-on-Chip (SoC) Frequency-Modulated Continuous-Wave (FMCW) radar sensor. Our network is based on the transformer architecture, which allows leveraging the attention mechanism to discover pairs of points in consecutive scans with the greatest mutual affinity. The proposed network is trained in a self-supervised way using a set-based multi-label classification cross-entropy loss, where the ground-truth set of matches is found by solving the Linear Sum Assignment (LSA) optimization problem, avoiding tedious hand annotation of the training data. Additionally, posing the loss calculation as multi-label classification permits supervising on point correspondences directly instead of on odometry error, which is not feasible for the sparse and noisy data from the SoC radar we use. We evaluate our method with an open-source state-of-the-art Radar-Inertial Odometry (RIO) framework in real-world Unmanned Aerial Vehicle (UAV) flights and on the widely used public Coloradar dataset. Evaluation shows that the proposed method improves position estimation accuracy by over 14% and 19% on average, respectively. The open-source code and datasets can be found here: https://github.com/aau-cns/radar_transformer.
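The matching step described in the abstract, solving point correspondences as a Linear Sum Assignment problem, can be sketched in a few lines. This is a minimal illustration using SciPy's Hungarian-style solver on a plain geometric cost matrix, not the authors' implementation: in the paper the costs would come from the transformer's predicted affinities, and the toy point clouds below are invented for demonstration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points(pc_a, pc_b):
    """Match points across two scans by minimising total assignment cost.

    pc_a: (N, 3) array, pc_b: (M, 3) array. Returns matched index pairs.
    Euclidean distance stands in for the learned affinity scores.
    """
    # Cost matrix: distance between every pair of points across the scans.
    cost = np.linalg.norm(pc_a[:, None, :] - pc_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # solves the LSA problem
    return list(zip(rows.tolist(), cols.tolist()))

# Toy scans: pc_b is pc_a with rows shuffled and a small "ego-motion" offset.
pc_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
pc_b = pc_a[[2, 0, 1]] + 0.05
print(match_points(pc_a, pc_b))  # [(0, 1), (1, 2), (2, 0)]
```

The same mechanism serves double duty in the paper: at training time it generates the pseudo ground-truth match set for self-supervision, and at inference time it turns predicted affinities into hard correspondences.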
Problem

Research questions and friction points this paper is trying to address.

Predicting point correspondences in noisy radar 3D point clouds
Improving odometry accuracy with sparse, unstructured radar data
Self-supervised learning for radar-inertial odometry without manual annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based learning for radar point clouds
Self-supervised training with LSA optimization
Multi-label classification for direct correspondence supervision
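The self-supervised training signal in the bullets above can be sketched as follows: LSA over a geometric cost produces pseudo-labels, and a per-point cross-entropy is then taken over the candidate matches in the other scan. This is a hypothetical NumPy version; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correspondence_loss(logits, cost):
    """Cross-entropy on LSA-derived pseudo-labels.

    logits: (N, M) predicted affinity scores (e.g. transformer outputs).
    cost:   (N, M) geometric cost used to build the self-supervised targets.
    """
    rows, cols = linear_sum_assignment(cost)  # pseudo ground-truth matches
    # Numerically stable log-softmax over candidate matches per point.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each point's assigned match.
    return -log_probs[rows, cols].mean()
```

Supervising on correspondences this way, rather than backpropagating through an odometry error, is what makes training feasible on the sparse, noisy SoC radar scans: the loss is well-defined per point pair even when a full pose estimate from a single scan pair would be unreliable.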