🤖 AI Summary
To meet the demand for high-precision localization of mobile devices in 5G/6G scenarios such as autonomous driving and augmented reality, this work proposes an unsupervised fusion localization framework that eliminates reliance on manual annotations. Methodologically, it jointly leverages geometric modeling and ray-tracing simulation, incorporating building-map priors to construct LoS/NLoS propagation models. Crucially, it introduces optimal transport (OT) to automatically generate pseudo-labels, enabling self-consistent integration of the model-driven and data-driven paradigms. The framework outperforms purely learning-based methods under LoS conditions and purely model-based approaches under NLoS, and in complex mixed LoS/NLoS environments it achieves accuracy comparable to fully supervised fingerprinting while drastically reducing dependence on large-scale labeled datasets. This establishes a novel paradigm for lightweight, generalizable, high-precision localization.
📝 Abstract
Accurate mobile device localization is critical for emerging 5G/6G applications such as autonomous vehicles and augmented reality. In this paper, we propose a unified localization method that integrates model-based and machine learning (ML)-based methods to reap their respective advantages by exploiting available map information. To avoid supervised learning, we generate training labels automatically via optimal transport (OT) by fusing geometric estimates with building layouts. Ray-tracing-based simulations demonstrate that the proposed method significantly improves positioning accuracy for both line-of-sight (LoS) users (compared to ML-based methods) and non-line-of-sight (NLoS) users (compared to model-based methods). Remarkably, the unified method achieves overall performance competitive with fully supervised fingerprinting, while eliminating the need for cumbersome labeled-data measurement and collection.
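The core idea, generating pseudo-labels by transporting noisy geometric position estimates onto map-consistent candidate locations, can be sketched with entropy-regularized OT. The sketch below is a minimal illustration, not the paper's actual pipeline: the candidate set, cost choice (squared distance), and barycentric-projection labeling are all assumptions, and the toy data is synthetic.

```python
import numpy as np

def sinkhorn(cost, reg=0.05, n_iter=200):
    """Entropy-regularized OT (Sinkhorn iterations) with uniform marginals.

    cost: (n, m) pairwise cost matrix; returns the (n, m) transport plan.
    """
    n, m = cost.shape
    K = np.exp(-cost / reg)              # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m)
    for _ in range(n_iter):              # alternate marginal scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy setup (illustrative): noisy geometric estimates for a few users,
# and map-feasible candidate locations (e.g., points outside buildings).
rng = np.random.default_rng(0)
candidates = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
estimates = candidates + 0.1 * rng.standard_normal((4, 2))

# Squared-distance cost between estimates and candidates.
cost = ((estimates[:, None, :] - candidates[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost)

# Pseudo-label each estimate via the barycentric projection of its plan row,
# i.e., a plan-weighted average of candidate coordinates.
pseudo_labels = (plan / plan.sum(1, keepdims=True)) @ candidates
```

With a small regularization the plan is nearly a one-to-one matching, so each pseudo-label snaps to the map-consistent candidate nearest its geometric estimate; these labels could then supervise a downstream ML positioning model without manual measurement.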