🤖 AI Summary
This paper addresses the extreme outlier ratios (>99.5%) that arise in semantic line-based visual relocalization, caused by one-to-many ambiguities in semantic line matching. We propose a lightweight, scene-agnostic single-image relocalization method. Our core contributions are: (1) the Saturated Consensus Maximization (Sat-CM) paradigm, which overcomes the failure of conventional consensus maximization under ultra-high outlier ratios; and (2) a globally optimal PnL solver based on interval analysis, enabling robust and efficient pose estimation. The pipeline comprises semantic 3D line map construction from posed depth images, 2D-3D semantic line matching, and robust pose estimation. Evaluated on ScanNet++, our method significantly outperforms state-of-the-art approaches, achieving high accuracy and strong robustness while maintaining real-time performance.
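The globally optimal solver mentioned above relies on interval analysis inside a branch-and-bound search: an interval enclosure of the objective over a parameter box gives a rigorous upper bound, so boxes that cannot beat the incumbent are discarded with a guarantee. The sketch below is not the paper's PnL solver; it is a generic 1-D illustration (maximizing the toy objective f(x) = x(1-x) on [0, 1]) of how interval enclosures drive certified pruning. All function names are illustrative.

```python
def imul(a, b):
    # Interval product: enclose {x*y : x in a, y in b} via endpoint products.
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def isub(a, b):
    # Interval difference: enclose {x - y : x in a, y in b}.
    return (a[0] - b[1], a[1] - b[0])

def f_interval(x):
    # Interval enclosure of the toy objective f(x) = x * (1 - x).
    return imul(x, isub((1.0, 1.0), x))

def branch_and_bound(lo, hi, tol=1e-6):
    # Certified global maximization: split boxes, prune those whose
    # interval upper bound cannot exceed the best value found so far.
    best = -float("inf")
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        _, upper = f_interval((a, b))
        if upper <= best + tol:
            continue  # no point in this box can beat the incumbent
        m = 0.5 * (a + b)
        best = max(best, m * (1.0 - m))  # candidate at the box midpoint
        if b - a > tol:
            stack.extend([(a, m), (m, b)])
    return best
```

The same pattern scales to pose search: the parameter box becomes a region of rotations/translations, and the interval bound certifies that no pose inside a pruned box can improve the objective.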
📝 Abstract
This is the arXiv version of our paper submitted to IEEE/RSJ IROS 2025. We propose a scene-agnostic and lightweight visual relocalization framework that leverages semantically labeled 3D lines as a compact map representation. In our framework, the robot localizes itself by capturing a single image, extracting 2D lines, associating them with semantically similar 3D lines in the map, and solving a robust perspective-n-line problem. To address the extremely high outlier ratios (exceeding 99.5%) caused by one-to-many ambiguities in semantic matching, we introduce the Saturated Consensus Maximization (Sat-CM) formulation, which enables accurate pose estimation where the classic Consensus Maximization framework fails. We further propose a fast global solver for the formulated Sat-CM problems, leveraging rigorous interval-analysis results to ensure both accuracy and computational efficiency. Additionally, we develop a pipeline for constructing semantic 3D line maps from posed depth images. To validate the effectiveness of our framework, which integrates our innovations in robust estimation with practical engineering insights, we conduct extensive experiments on the ScanNet++ dataset.
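The intuition behind saturating the consensus objective can be sketched in a few lines. Classic Consensus Maximization counts every residual below a threshold, so a single 2D line with many putative 3D matches can flood the objective with spurious inliers. A saturated variant caps the total contribution per source feature. This is an illustrative toy scoring function under assumed names (`residuals`, `match_ids`, `eps`, `cap`), not the paper's actual formulation or solver.

```python
import numpy as np

def consensus_score(residuals, eps):
    # Classic Consensus Maximization objective: count all residuals
    # that fall within the inlier threshold eps.
    return int(np.sum(residuals < eps))

def saturated_consensus_score(residuals, match_ids, eps, cap=1.0):
    # Saturated variant (illustrative): residuals are grouped by the
    # 2D line that generated them (match_ids); each group's inlier
    # contribution is capped, so one-to-many ambiguities cannot
    # inflate the objective for a wrong pose hypothesis.
    score = 0.0
    for i in np.unique(match_ids):
        group_inliers = np.sum(residuals[match_ids == i] < eps)
        score += min(float(group_inliers), cap)
    return score
```

With `cap=1.0`, a 2D line contributes at most one unit no matter how many of its putative 3D matches happen to agree with a candidate pose, which is what restores discriminative power at >99.5% outlier ratios.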