U-ViLAR: Uncertainty-Aware Visual Localization for Autonomous Driving via Differentiable Association and Registration

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degraded robustness and accuracy of visual localization in urban environments caused by severe GNSS signal attenuation, this paper proposes an uncertainty-aware adaptive visual localization framework. The method integrates bird's-eye-view (BEV) feature extraction, differentiable feature association, and probabilistic registration optimization. It introduces a dual uncertainty modeling mechanism: perceptual uncertainty guides cross-modal feature matching, while localization uncertainty dynamically modulates a coarse-to-fine alignment strategy, enabling multi-scale adaptive spatial alignment between BEV features and either HD maps or navigation maps. Evaluated on multiple public benchmarks, the framework achieves state-of-the-art performance, and extensive validation on large-scale autonomous driving fleets confirms its accuracy and robustness in complex urban scenarios.
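The perceptual-uncertainty-guided association step described above can be sketched as a soft (differentiable) matching between BEV cells and map elements, where uncertain cells contribute a flatter match distribution. This is an illustrative numpy sketch, not the paper's implementation; the function name `uncertainty_weighted_association`, the cosine-similarity metric, the temperature `tau`, and the uniform-blending scheme are all assumptions for exposition.

```python
import numpy as np

def uncertainty_weighted_association(bev_feats, map_feats, perc_uncertainty, tau=0.1):
    """Soft (differentiable) association between BEV cells and map elements.

    bev_feats:        (N, D) BEV feature vectors
    map_feats:        (M, D) map element feature vectors
    perc_uncertainty: (N,) per-cell perceptual uncertainty in [0, 1];
                      uncertain cells contribute less sharply to the match.
    Returns an (N, M) soft-assignment matrix whose rows sum to 1.
    """
    # Cosine similarity between every BEV cell and every map element.
    b = bev_feats / (np.linalg.norm(bev_feats, axis=1, keepdims=True) + 1e-8)
    m = map_feats / (np.linalg.norm(map_feats, axis=1, keepdims=True) + 1e-8)
    sim = b @ m.T                                   # (N, M)

    # Temperature-scaled softmax yields a differentiable soft assignment.
    logits = sim / tau
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    soft = np.exp(logits)
    soft /= soft.sum(axis=1, keepdims=True)

    # Assumed uncertainty handling: blend each row toward a uniform
    # distribution in proportion to its perceptual uncertainty, so
    # unreliable observations influence the pose estimate less.
    u = np.asarray(perc_uncertainty)[:, None]
    uniform = np.full_like(soft, 1.0 / soft.shape[1])
    return (1.0 - u) * soft + u * uniform
```

A confident BEV cell thus produces a sharply peaked assignment over map elements, while a fully uncertain one degrades gracefully to a uniform (uninformative) assignment.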

📝 Abstract
Accurate localization using visual information is a critical yet challenging task, especially in urban environments where nearby buildings and construction sites significantly degrade GNSS (Global Navigation Satellite System) signal quality. This issue underscores the importance of visual localization techniques in scenarios where GNSS signals are unreliable. This paper proposes U-ViLAR, a novel uncertainty-aware visual localization framework designed to address these challenges while enabling adaptive localization using high-definition (HD) maps or navigation maps. Specifically, our method first extracts features from the input visual data and maps them into Bird's-Eye-View (BEV) space to enhance spatial consistency with the map input. Subsequently, we introduce: a) Perceptual Uncertainty-guided Association, which mitigates errors caused by perception uncertainty, and b) Localization Uncertainty-guided Registration, which reduces errors introduced by localization uncertainty. By effectively balancing the coarse-grained large-scale localization capability of association with the fine-grained precise localization capability of registration, our approach achieves robust and accurate localization. Experimental results demonstrate that our method achieves state-of-the-art performance across multiple localization tasks. Furthermore, our model has undergone rigorous testing on large-scale autonomous driving fleets and has demonstrated stable performance in various challenging urban scenarios.
Problem

Research questions and friction points this paper is trying to address.

Visual localization in GNSS-degraded urban environments
Uncertainty-aware HD map alignment for autonomous driving
Robust feature association and registration under perception errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

U-ViLAR couples differentiable feature association with probabilistic registration in one uncertainty-aware framework
Maps visual features into Bird's-Eye-View space for spatial consistency with the map input
Dual uncertainty guidance: perceptual uncertainty steers association, localization uncertainty modulates coarse-to-fine registration