🤖 AI Summary
To address the vehicle localization challenge arising from large viewpoint discrepancies and cross-modal heterogeneity between aerial imagery and ground-level LiDAR, this paper proposes a bidirectional cross-modal attention fusion framework. First, LiDAR point clouds are projected onto a bird's-eye view (BEV) representation; then, geometric and semantic features are jointly modeled via bidirectional cross-attention. A likelihood map decoder outputs probabilistic estimates of position and orientation, while InfoNCE contrastive learning constructs a unified, robust embedding space for cross-modal alignment. This work is the first to integrate bidirectional attention with contrastive learning for aerial–ground localization, significantly enhancing cross-modal matching capability. Evaluated on CARLA and KITTI benchmarks, the method reduces localization error by up to 63%, achieving sub-meter positioning accuracy and sub-degree orientation precision. It demonstrates strong generalization across both synthetic and real-world scenarios.
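The fusion step can be pictured with a short sketch. The PyTorch module below is an illustrative assumption of how bidirectional cross-attention between flattened BEV LiDAR features and flattened aerial image features might look; the class name, feature dimensions, and the residual/LayerNorm layout are not taken from the paper.

```python
# Minimal sketch (assumed layout, not the authors' code) of bidirectional
# cross-attention between BEV-projected LiDAR features and aerial features.
import torch
import torch.nn as nn


class BidirectionalCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Ground -> aerial: BEV tokens query the aerial feature map.
        self.bev_to_aerial = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Aerial -> ground: aerial tokens query the BEV feature map.
        self.aerial_to_bev = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_bev = nn.LayerNorm(dim)
        self.norm_aerial = nn.LayerNorm(dim)

    def forward(self, bev_feats, aerial_feats):
        # bev_feats:    (B, N_bev, dim)  flattened BEV grid cells
        # aerial_feats: (B, N_air, dim)  flattened aerial patches
        bev_attn, _ = self.bev_to_aerial(bev_feats, aerial_feats, aerial_feats)
        air_attn, _ = self.aerial_to_bev(aerial_feats, bev_feats, bev_feats)
        bev_out = self.norm_bev(bev_feats + bev_attn)        # residual fusion
        air_out = self.norm_aerial(aerial_feats + air_attn)
        return bev_out, air_out


if __name__ == "__main__":
    fusion = BidirectionalCrossAttention()
    bev = torch.randn(2, 64 * 64, 256)     # hypothetical 64x64 BEV grid
    aerial = torch.randn(2, 32 * 32, 256)  # hypothetical 32x32 aerial patch grid
    bev_fused, aerial_fused = fusion(bev, aerial)
    print(bev_fused.shape, aerial_fused.shape)
```

In this sketch each modality attends to the other and keeps a residual connection to its own features, so geometric (BEV) and semantic (aerial) cues can inform one another before the likelihood map decoder.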
📝 Abstract
Aerial-ground localization is difficult due to large viewpoint and modality gaps between ground-level LiDAR and overhead imagery. We propose TransLocNet, a cross-modal attention framework that fuses LiDAR geometry with aerial semantic context. LiDAR scans are projected into a bird's-eye-view representation and aligned with aerial features through bidirectional attention, followed by a likelihood map decoder that outputs spatial probability distributions over position and orientation. A contrastive learning module enforces a shared embedding space to improve cross-modal alignment. Experiments on CARLA and KITTI show that TransLocNet outperforms state-of-the-art baselines, reducing localization error by up to 63% and achieving sub-meter, sub-degree accuracy. These results demonstrate that TransLocNet provides robust and generalizable aerial-ground localization in both synthetic and real-world settings.
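For the contrastive module, the sketch below shows a symmetric InfoNCE objective over matched BEV/aerial embedding pairs within a batch; the function name, the pooling of each modality into a single (B, D) embedding, and the temperature value are assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of an InfoNCE loss for cross-modal alignment: matched
# BEV/aerial pairs are positives, all other in-batch pairs are negatives.
import torch
import torch.nn.functional as F


def info_nce_loss(bev_emb, aerial_emb, temperature=0.07):
    # bev_emb, aerial_emb: (B, D) embeddings of the same scenes, row-aligned.
    bev_emb = F.normalize(bev_emb, dim=-1)
    aerial_emb = F.normalize(aerial_emb, dim=-1)
    logits = bev_emb @ aerial_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(bev_emb.size(0), device=bev_emb.device)
    # Symmetric loss over both retrieval directions (BEV->aerial, aerial->BEV).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    bev = torch.randn(8, 256)
    aerial = torch.randn(8, 256)
    print(info_nce_loss(bev, aerial))
```

Pulling matched pairs together and pushing mismatched pairs apart is what enforces the shared embedding space described in the abstract.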