🤖 AI Summary
To address the challenges of place recognition under adverse weather conditions—caused by sparsity, high noise, and low resolution in 4D radar data—this paper proposes the first end-to-end radar-based pose estimation framework. Methodologically, we introduce a novel trajectory-guided deformable feature aggregation mechanism: radar-derived ego-velocity estimates are used to predict motion trajectories and correct temporal misalignment in bird’s-eye-view (BEV) features; this is combined with dynamic point filtering, BEV grid-based encoding, optical-flow-inspired feature alignment, and multi-scale spatiotemporal attention aggregation. Evaluated on a real-world automotive 4D radar dataset, our method achieves a 21.3% improvement in place recognition accuracy under dynamic scenes, significantly outperforming existing approaches. Extensive experiments demonstrate strong robustness and practicality in challenging conditions including rain, fog, and nighttime operation.
📝 Abstract
Place recognition is essential for achieving closed-loop or global positioning in autonomous vehicles and mobile robots. Despite recent advancements in place recognition using 2D cameras or 3D LiDAR, it remains to be seen how to use 4D radar for place recognition - an increasingly popular sensor for its robustness against adverse weather and lighting conditions. Compared to LiDAR point clouds, radar data are drastically sparser, noisier and in much lower resolution, which hampers their ability to effectively represent scenes, posing significant challenges for 4D radar-based place recognition. This work addresses these challenges by leveraging multi-modal information from sequential 4D radar scans and effectively extracting and aggregating spatio-temporal features.Our approach follows a principled pipeline that comprises (1) dynamic points removal and ego-velocity estimation from velocity property, (2) bird's eye view (BEV) feature encoding on the refined point cloud, (3) feature alignment using BEV feature map motion trajectory calculated by ego-velocity, (4) multi-scale spatio-temporal features of the aligned BEV feature maps are extracted and aggregated.Real-world experimental results validate the feasibility of the proposed method and demonstrate its robustness in handling dynamic environments. Source codes are available.