🤖 AI Summary
To address the “position → multi-antenna, multi-band channel response” mapping challenge in integrated wireless localization and channel modeling—where conventional implicit neural representations (INRs) suffer from spectral bias and fail to capture wavelength-scale rapid variations—this paper proposes a propagation-model-driven INR architecture. By embedding electromagnetic propagation priors directly into the network structure, the method learns only low-frequency sparse correction terms that adaptively activate high-frequency dictionary atoms, thereby achieving both high accuracy and strong physical interpretability. The approach synergistically integrates model-guided learning, sparse dictionary learning, and multi-band signal processing. Evaluated on synthetic data, it significantly outperforms classical INRs in channel prediction accuracy while simultaneously improving physical consistency. Its interpretability is further validated via approximate channel modeling, demonstrating explicit correspondence between learned components and underlying wave propagation physics.
📝 Abstract
Years of propagation-channel studies have shown a close relation between a location and the associated communication channel response. The use of a neural network to learn the location-to-channel mapping can therefore be envisioned. The Implicit Neural Representation (INR) literature showed that classical neural architectures are biased towards learning low-frequency content, making learning the location-to-channel mapping a non-trivial problem. Indeed, it is well known that this mapping is a rapidly varying function of location, at the scale of the wavelength. This paper leverages the model-based machine learning paradigm to derive a problem-specific neural architecture from a propagation channel model. The resulting architecture efficiently overcomes the spectral-bias issue: it only learns low-frequency sparse correction terms that activate a dictionary of high-frequency components. The proposed architecture is evaluated against classical INR architectures on realistic synthetic data, showing much better accuracy. Its mapping learning performance is explained based on the approximated channel model, highlighting the explainability of the model-based machine learning paradigm.
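The core idea, low-frequency sparse coefficients activating a dictionary of high-frequency atoms, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the plane-wave dictionary construction, the wavelength value, the linear stand-in for the correction network, and the soft-thresholding step are all assumptions made for the sake of the sketch.

```python
import numpy as np

# Hypothetical parameters (not taken from the paper)
wavelength = 0.1   # carrier wavelength in meters (assumption)
n_atoms = 64       # size of the high-frequency dictionary
rng = np.random.default_rng(0)

# High-frequency dictionary: plane-wave atoms exp(j*2*pi*<d_k, x>/lambda),
# one per candidate propagation direction d_k (assumed 2-D geometry).
directions = rng.standard_normal((n_atoms, 2))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def dictionary(x):
    """High-frequency atoms evaluated at location x (shape (2,))."""
    phases = 2 * np.pi * (directions @ x) / wavelength
    return np.exp(1j * phases)  # shape (n_atoms,)

def correction_network(x, W, b):
    """Stand-in for the low-frequency network: a tiny linear map in x
    followed by soft-thresholding to produce *sparse* coefficients."""
    coeffs = W @ x + b  # slowly varying with location
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - 1.0, 0.0)

# Random (untrained) network parameters, for illustration only.
W = rng.standard_normal((n_atoms, 2))
b = rng.standard_normal(n_atoms)

def predicted_channel(x):
    """Channel response = sparse low-frequency coefficients
    weighting the high-frequency dictionary atoms."""
    return np.sum(correction_network(x, W, b) * dictionary(x))

h = predicted_channel(np.array([1.0, 2.0]))
print(np.iscomplexobj(h))  # prints True: the response is complex-valued
```

The split mirrors the architecture's rationale: the network only has to fit smooth coefficient maps, which classical INRs learn well, while the wavelength-scale oscillations come for free from the fixed physics-derived dictionary.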