🤖 AI Summary
To address the high computational overhead and poor real-time performance caused by high-dimensional features in wireless localization, this paper proposes an information-theoretic lightweight localization method. We design a dynamic feature selection mechanism guided by the Minimum Description Length (MDL) principle, using only the strongest-power measurements and temporal position cues to construct a minimal input. The proposed P-NN model jointly models sparse power-delay profiles (PDPs) and measurement matrices via dual-channel representation, and incorporates a self-attention layer to enhance discriminability of critical features under low signal-to-noise ratio (SNR). Experimental results show that P-NN achieves high accuracy while significantly reducing complexity: it reduces localization error by over 35% and inference latency by approximately 60% compared to full-PDP deep learning baselines under low-SNR conditions. This work introduces the first MDL-guided dynamic feature adaptation mechanism and sparse PDP compression modeling, establishing a new paradigm for high-accuracy wireless localization in resource-constrained environments.
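The core feature-selection idea described above, keeping only the strongest power measurements together with their temporal (delay-bin) positions, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name and interface are hypothetical, and the choice of `k` stands in for the MDL-guided feature-space size.

```python
import numpy as np

def min_description_features(pdp, k):
    """Sketch of minimum-description feature extraction:
    keep the k strongest power bins of a power-delay profile (PDP)
    along with their delay-bin locations. Illustrative only."""
    top = np.argsort(pdp)[::-1][:k]   # indices of the k largest powers
    top = np.sort(top)                # restore temporal order of the bins
    return pdp[top], top              # (selected powers, their delay bins)

# Toy 6-bin PDP; with k=3 the weak bins are discarded entirely.
pdp = np.array([0.10, 0.05, 0.90, 0.20, 0.60, 0.02])
powers, delays = min_description_features(pdp, k=3)
# delays -> [2, 3, 4]; powers -> [0.90, 0.20, 0.60]
```

Discarding the remaining bins is what shrinks the input dimension, and, at low SNR, removes bins that carry mostly noise.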
📝 Abstract
Recently, deep learning approaches have provided solutions to difficult problems in wireless positioning (WP). Although these WP algorithms have attained excellent and consistent performance in complex channel environments, the computational complexity of processing high-dimensional features can be prohibitive for mobile applications. In this work, we design a novel positioning neural network (P-NN) that utilizes minimum description features to substantially reduce the complexity of deep learning-based WP. P-NN’s feature selection strategy is based on maximum power measurements and their temporal locations, which convey the information needed to conduct WP. We improve P-NN’s learning ability by intelligently processing two different types of inputs: sparse image and measurement matrices. Specifically, we implement a self-attention layer to reinforce the training ability of our network. We also develop a technique to adapt the feature space size, optimizing over the expected information gain and the classification capability quantified with information-theoretic measures on signal bin selection. Numerical results show that P-NN achieves a significant advantage in the performance-complexity tradeoff over deep learning baselines that leverage the full power delay profile (PDP). In particular, we find that P-NN achieves a large improvement in performance at low SNR, as unnecessary measurements are discarded in our minimum description features.
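The self-attention layer mentioned in the abstract operates over the selected measurement bins. The paper does not specify its exact form here, so the following is a generic single-head scaled dot-product self-attention sketch in NumPy, assuming each of the `k` selected bins is embedded as a feature row of `X`; all weight matrices are illustrative placeholders.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Generic single-head scaled dot-product self-attention over the
    rows of X (one row per selected measurement bin). A sketch of the
    kind of layer described, not the paper's exact architecture."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])              # pairwise bin affinities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                   # row-wise softmax
    return w @ V                                        # reweighted bin features

# Toy usage: k=5 selected bins, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)   # shape (5, 8)
```

The attention weighting lets the network emphasize the bins that remain discriminative at low SNR, which is consistent with the reported low-SNR gains.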