CSCPR: Cross-Source-Context Indoor RGB-D Place Recognition

📅 2024-07-24
🏛️ arXiv.org
🤖 AI Summary
This paper addresses place recognition in cross-source, cross-scale RGB-D indoor scenes. It proposes an end-to-end unified framework that models global and local semantic structure via Context-of-Clusters (CoCs), with a dual-module re-ranking mechanism: the Self-Context Cluster (SCC) for intra-scene contextual refinement and the Cross-Source Context Cluster (CSCC) for inter-source alignment. Notably, this is the first work to extend CoCs to RGB-D point clouds, enabling robust feature alignment and local matching across heterogeneous devices and resolutions. To facilitate evaluation, the authors introduce two new benchmarks, ScanNetIPR and ARKitIPR, designed for cross-source, cross-scale place recognition. Experiments demonstrate substantial improvements: at least +29.27% Recall@1 on ScanNet-PR and +43.24% on the new benchmarks, significantly outperforming state-of-the-art methods.

📝 Abstract
We extend our previous work, PoCo, and present a new algorithm, Cross-Source-Context Place Recognition (CSCPR), for RGB-D indoor place recognition that integrates global retrieval and reranking into an end-to-end model and maintains consistency by using Context-of-Clusters (CoCs) for feature processing. Unlike prior approaches that primarily focus on the RGB domain for place recognition reranking, CSCPR is designed to handle RGB-D data. We apply CoCs to handle cross-sourced and cross-scaled RGB-D point clouds and introduce two novel modules for reranking: the Self-Context Cluster (SCC) and the Cross-Source Context Cluster (CSCC), which enhance feature representation and match query-database pairs based on local features, respectively. We also release two new datasets, ScanNetIPR and ARKitIPR. Our experiments demonstrate that CSCPR significantly outperforms state-of-the-art models on these datasets by at least 29.27% in Recall@1 on the ScanNet-PR dataset and 43.24% on the new datasets. Code and datasets will be released.
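The abstract describes a two-stage pipeline: global-descriptor retrieval followed by reranking of candidates via local-feature matching. Below is a minimal generic sketch of that pattern, not the released CSCPR code; the cosine-similarity scoring, the mutual-nearest-neighbor match count (a stand-in for the SCC/CSCC modules), and all function names are illustrative assumptions.

```python
import numpy as np

def global_retrieval(query_desc, db_descs, top_k=5):
    # Stage 1: rank database scenes by cosine similarity of global descriptors.
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)[:top_k]

def rerank(query_locals, db_locals_list, candidates):
    # Stage 2: re-score candidates by counting mutual nearest-neighbor
    # matches between local features (illustrative stand-in for the
    # learned SCC/CSCC reranking modules).
    def match_count(a, b):
        sims = a @ b.T
        fwd = sims.argmax(axis=1)  # best match in b for each feature in a
        bwd = sims.argmax(axis=0)  # best match in a for each feature in b
        return int(np.sum(bwd[fwd] == np.arange(len(a))))

    scores = [match_count(query_locals, db_locals_list[i]) for i in candidates]
    order = np.argsort(-np.asarray(scores))
    return [candidates[i] for i in order]
```

Recall@1 under this scheme is simply the fraction of queries whose top reranked candidate is the ground-truth scene; CSCPR's contribution is making both stages end-to-end and CoCs-based rather than hand-crafted like this sketch.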
Problem

Research questions and friction points this paper is trying to address.

Indoor localization
Color-depth image recognition
Environmental robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

CSCPR
RGB-D Imaging
Contextual Clustering