🤖 AI Summary
Urban autonomous driving faces critical localization challenges: GNSS signal occlusion and multipath interference, the high construction and maintenance costs of high-definition (HD) maps, and the weak performance or poor generalization of methods that rely on standard-definition (SD) maps.
Method: This paper proposes SegLocNet, a GNSS-free localization framework built on a unified bird's-eye-view (BEV) semantic map representation. It fuses multi-modal sensor data through BEV semantic segmentation to generate real-time semantic top-down views, then exhaustively matches them against HD or SD maps for robust pose estimation, avoiding end-to-end pose regression to preserve interpretability and generalization.
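The exhaustive-matching step can be pictured as a grid search over pose hypotheses: each candidate offset overlays the online BEV semantic grid on a rasterized map tile and scores the label agreement. Below is a minimal, hypothetical sketch of that idea, restricted to translation only and using illustrative names (the paper's actual method also searches over yaw and works with segmentation probabilities rather than hard labels):

```python
import numpy as np

def exhaustive_translation_match(bev_sem, map_tile, search_radius=3):
    """Score every (dx, dy) shift of the online BEV semantic grid against a
    larger map tile and return the offset with the highest label agreement.

    bev_sem:  (H, W) integer semantic labels predicted from sensor inputs.
    map_tile: (H + 2r, W + 2r) labels rasterized from the HD/SD map around
              the pose prior, where r = search_radius (in grid cells).
    """
    h, w = bev_sem.shape
    best_score, best_offset = -1.0, (0, 0)
    for dy in range(2 * search_radius + 1):
        for dx in range(2 * search_radius + 1):
            crop = map_tile[dy:dy + h, dx:dx + w]
            score = np.mean(crop == bev_sem)  # fraction of agreeing cells
            if score > best_score:
                best_score = score
                best_offset = (dx - search_radius, dy - search_radius)
    return best_offset, best_score
```

Because every hypothesis is scored explicitly, the output is a full score map rather than a single regressed pose, which is what gives this family of methods its interpretability: a flat or multi-modal score surface directly signals an ambiguous scene.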
Contribution/Results: Evaluated on nuScenes and Argoverse, the method achieves state-of-the-art performance with sub-meter absolute localization accuracy. It demonstrates strong generalization across diverse urban scenes and maintains full compatibility with low-cost SD maps, significantly reducing dependency on expensive HD infrastructure.
📝 Abstract
Robust and accurate localization is critical for autonomous driving. Traditional GNSS-based localization methods suffer from signal occlusion and multipath effects in urban environments. Meanwhile, methods relying on high-definition (HD) maps are constrained by the high costs of constructing and maintaining HD maps. Methods based on standard-definition (SD) maps, on the other hand, often exhibit unsatisfactory performance or poor generalization ability due to overfitting. To address these challenges, we propose SegLocNet, a multimodal GNSS-free localization network that achieves precise localization using bird's-eye-view (BEV) semantic segmentation. SegLocNet employs a BEV segmentation network to generate semantic maps from multiple sensor inputs, followed by an exhaustive matching process to estimate the vehicle's ego pose. This approach avoids the limitations of regression-based pose estimation and maintains high interpretability and generalization. By introducing a unified map representation, our method can be applied to both HD and SD maps without any modifications to the network architecture, thereby balancing localization accuracy and area coverage. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that our method outperforms current state-of-the-art methods and accurately estimates the ego pose in urban environments without relying on GNSS, while maintaining strong generalization ability. Our code and pre-trained model will be released publicly.
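The "unified map representation" mentioned in the abstract amounts to rendering heterogeneous map sources into one common semantic grid: HD elements (lane dividers, pedestrian crossings) and SD elements (road centerlines) are all burned into the same raster, so the matching stage never needs to know which map type it is given. A minimal sketch of such a rasterizer, with illustrative names and a hypothetical 0.5 m cell size:

```python
import numpy as np

def rasterize_polyline(grid, pts, label, res=0.5):
    """Burn a polyline (vertices in metres) into an integer label grid with
    cells of `res` metres. Both HD map elements (e.g. lane dividers) and SD
    map elements (e.g. road centerlines) can be rendered this way into the
    same grid, yielding a map-type-agnostic semantic representation."""
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
        # Sample the segment densely enough to touch every crossed cell.
        n = max(int(max(abs(x1 - x0), abs(y1 - y0)) / res) + 1, 2)
        for t in np.linspace(0.0, 1.0, n):
            c = int(round((x0 + t * (x1 - x0)) / res))
            r = int(round((y0 + t * (y1 - y0)) / res))
            if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]:
                grid[r, c] = label
    return grid
```

For example, `rasterize_polyline(grid, [(0, 0), (9, 0)], label=1)` draws a straight 9 m road-centerline segment as a row of label-1 cells; calling it again with HD lane geometry and a different label fills the same grid, which can then be cropped into the `map_tile` consumed by the matching stage.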