🤖 AI Summary
Deep learning models for wireless localization suffer from high computational overhead and poor deployability on mobile devices due to high-dimensional channel features.
Method: This paper proposes P-NN, a lightweight neural network for localization built on minimal descriptive features. The authors introduce an information-theoretic mechanism that adaptively selects the feature-space dimensionality, quantifying feature discriminability via mutual information and entropy. Instead of processing the full power delay profile (PDP), P-NN extracts only two kinds of time-domain features from the time-series power measurements: the peak power values and their arrival times.
Contribution/Results: The resulting architecture maintains high localization accuracy while drastically reducing feature dimensionality and computational complexity. Experiments show that P-NN matches the accuracy of PDP-based baselines while compressing the feature space by over 90% and cutting inference latency by an order of magnitude, yielding a superior accuracy-efficiency trade-off.
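To make the "minimum description features" idea concrete, here is an illustrative sketch (not the paper's actual code) of extracting only the k strongest power peaks and their arrival-time bins from a PDP, assuming the PDP is given as a NumPy array of per-bin powers:

```python
import numpy as np

def minimal_features(pdp, k):
    """Illustrative sketch: keep only the k largest power values and their
    arrival-time bin indices from a power delay profile (PDP).
    This replaces the full high-dimensional PDP with a 2k-dim feature vector."""
    idx = np.argsort(pdp)[-k:][::-1]      # indices of the k strongest bins, descending power
    powers = pdp[idx]                     # peak power values
    times = np.sort(idx)                  # arrival-time indices, earliest first
    return np.concatenate([powers, times.astype(float)])

# Toy 8-bin PDP; the two strongest taps are at bins 2 and 4.
pdp = np.array([0.1, 0.3, 2.5, 0.2, 1.7, 0.4, 0.9, 0.05])
feat = minimal_features(pdp, k=2)
# feat → [2.5, 1.7, 2.0, 4.0]: two peak powers followed by their bin indices
```

With k much smaller than the number of PDP bins, the downstream network's input layer shrinks accordingly, which is the source of the complexity reduction the summary describes.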
📝 Abstract
A recent line of research has investigated deep learning approaches to wireless positioning (WP). Although these WP algorithms have demonstrated high accuracy and robust performance across diverse channel conditions, they have a major drawback: they require processing high-dimensional features, which can be prohibitive for mobile applications. In this work, we design a positioning neural network (P-NN) that substantially reduces the complexity of deep learning-based WP through carefully crafted minimum description features. Our feature selection is based on maximum power measurements and their temporal locations, which convey the information needed to conduct WP. We also develop a novel methodology for adaptively selecting the size of the feature space, balancing the expected amount of useful information against classification capability, both quantified using information-theoretic measures on the signal bin selection. Numerical results show that P-NN achieves a significant advantage in the performance-complexity tradeoff over deep learning baselines that leverage the full power delay profile (PDP).
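The abstract's adaptive feature-space sizing can be sketched with a simple empirical criterion. The paper's exact optimization is not reproduced here; the hedged toy version below greedily grows the number of retained peaks k as long as the arrival-time bin of the next-strongest peak still carries more than `eps` nats of mutual information about the location label (the threshold `eps` and the greedy rule are assumptions for illustration):

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete arrays."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)         # contingency table of (x, y) pairs
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def select_feature_size(pdps, labels, k_max, eps=0.05):
    """Toy adaptive sizing: grow k while the arrival-time bin of the k-th
    strongest peak still tells us at least eps nats about the location label."""
    order = np.argsort(pdps, axis=1)[:, ::-1]   # per-sample bins sorted by power
    k = 1
    for cand in range(2, k_max + 1):
        gain = mutual_information(order[:, cand - 1], labels)
        if gain < eps:
            break
        k = cand
    return k
```

In this sketch, informative peaks (whose timing varies systematically with location) push k upward, while peaks whose timing is independent of location contribute near-zero mutual information and stop the growth, mirroring the abstract's balance between useful information and classification capability.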