🤖 AI Summary
To address the localization challenge in large-scale urban scenes, where incomplete textual descriptions correspond only partially to 3D point cloud locations, this paper proposes an uncertainty-aware cross-modal localization method. Our approach introduces three key innovations: (1) the first application of a Cauchy Mixture Model (CMM) to explicitly model uncertainty in text–point cloud matching, serving as a robust cross-modal prior; (2) a spatial consolidation scheme and a cardinal direction integration module that jointly align semantic and geometric cues; and (3) a modality pre-alignment strategy that enhances the geometric discriminability of local descriptions. Evaluated on the KITTI360Pose dataset, our method achieves state-of-the-art performance, significantly improving localization accuracy under partially relevant textual queries. The source code is publicly available.
📝 Abstract
The goal of point cloud localization based on linguistic description is to identify a 3D position from a textual description in large urban environments, with potential applications in various fields, such as determining a location for vehicle pickup or goods delivery. Ideally, for a textual description and its corresponding 3D location, the objects around the 3D location would be fully described in the text. In practical scenarios such as vehicle pickup, however, passengers usually describe only the most significant and nearby parts of the surroundings rather than the entire environment. In response to this $\textbf{partially relevant}$ challenge, we propose $\textbf{CMMLoc}$, an uncertainty-aware $\textbf{C}$auchy-$\textbf{M}$ixture-$\textbf{M}$odel ($\textbf{CMM}$)-based framework for text-to-point-cloud $\textbf{Loc}$alization. To model the uncertain semantic relations between text and point cloud, we integrate CMM constraints as a prior during the interaction between the two modalities. We further design a spatial consolidation scheme to enable adaptive aggregation of different 3D objects with varying receptive fields. To achieve precise localization, we propose a cardinal direction integration module alongside a modality pre-alignment strategy, which helps capture the spatial relationships among objects and brings the 3D objects closer to the text modality. Comprehensive experiments validate that CMMLoc outperforms existing methods, achieving state-of-the-art results on the KITTI360Pose dataset. Code is available at https://github.com/kevin301342/CMMLoc.
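To give a flavor of how a Cauchy mixture can act as a matching prior, the minimal sketch below evaluates a 1-D Cauchy mixture over text–object distances and normalizes the densities into attention-style weights. This is an illustrative assumption, not the paper's actual implementation: the function names, the 1-D distance input, and the fixed component parameters (`locs`, `scales`, `weights`) are all hypothetical; the heavy tails of the Cauchy density are what keep weakly matching (partially relevant) objects from being suppressed to near zero, as a Gaussian mixture would.

```python
import numpy as np

def cauchy_pdf(x, loc, scale):
    # Cauchy density: 1 / (pi * scale * (1 + ((x - loc) / scale)^2))
    return 1.0 / (np.pi * scale * (1.0 + ((x - loc) / scale) ** 2))

def cauchy_mixture_weights(distances, locs, scales, weights):
    # Evaluate a K-component Cauchy mixture at each text-object
    # distance, then normalize into attention-style weights.
    density = sum(w * cauchy_pdf(distances, m, s)
                  for w, m, s in zip(weights, locs, scales))
    return density / density.sum()

# Hypothetical example: three objects at increasing distance from
# the described location; closer objects get larger weight, but the
# heavy Cauchy tails leave the far object with non-negligible mass.
d = np.array([0.1, 0.5, 2.0])
w = cauchy_mixture_weights(d, locs=[0.0, 1.0], scales=[0.5, 0.5],
                           weights=[0.6, 0.4])
```

In the paper's setting the prior constrains cross-modal attention between text tokens and 3D object features rather than scalar distances, but the normalization of heavy-tailed component densities is the same basic mechanism.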