🤖 AI Summary
Wireless channel modeling for communications and radar systems suffers from heavy reliance on high-quality labeled data, poor generalization, and limited physical interpretability. To address these challenges, this paper proposes a Sparse Bayesian Generative Modeling (SBGM) framework that explicitly incorporates physical priors. Specifically, it is the first to embed the inherent compressibility of wireless channels into a generative model, enabling online learning from compressed measurements. Physical constraints derived from electromagnetic propagation principles, together with parameterized channel representations, ensure model transparency and interpretability. Moreover, the method supports zero-shot transfer across antenna configurations and frequency bands without retraining. Experimental results demonstrate that SBGM reconstructs channel parameter distributions with high fidelity from only a small number of compressed measurements, substantially reducing data acquisition and labeling overhead while improving environmental adaptability and cross-scenario generalization.
📝 Abstract
Learning the site-specific distribution of the wireless channel within a particular environment of interest is essential to exploit the full potential of machine learning (ML) for wireless communications and radar applications. Generative modeling offers a promising framework to address this problem. However, existing approaches face unresolved challenges, including the need for high-quality training data, limited generalizability, and a lack of physical interpretability. To address these issues, we combine the physics-related compressibility of wireless channels with generative modeling, in particular sparse Bayesian generative modeling (SBGM), to learn the distribution of the underlying physical channel parameters. By leveraging the sparsity-inducing characteristics of SBGM, our methods can learn from compressed observations received by an access point (AP) during default online operation. Moreover, they are physically interpretable and generalize over system configurations without requiring retraining.
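The core ingredient, recovering a sparse physical parameter vector from compressed observations, can be illustrated with a minimal sparse Bayesian learning (SBL) sketch. All dimensions, the random compression matrix, the noise level, and the plain EM update loop below are illustrative assumptions for a toy setup, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a sparse channel parameter vector h with a few active
# components, observed through a random compression matrix A, i.e.
# y = A h + noise (sizes chosen for illustration only).
n_params, n_meas, n_active = 64, 20, 3
h = np.zeros(n_params)
h[rng.choice(n_params, n_active, replace=False)] = rng.standard_normal(n_active)
A = rng.standard_normal((n_meas, n_params)) / np.sqrt(n_meas)
sigma2 = 1e-4  # assumed known noise variance
y = A @ h + np.sqrt(sigma2) * rng.standard_normal(n_meas)

# Classic SBL: a zero-mean Gaussian prior with per-coefficient variance
# gamma_i on each parameter; EM alternates between the Gaussian posterior
# of h and updates of the gammas, which shrink toward zero for inactive
# components and thereby induce sparsity.
gamma = np.ones(n_params)
for _ in range(200):
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)  # floor for stability

# mu is the posterior-mean estimate of h; most entries are driven to ~0.
print(np.round(mu, 3))
```

The sketch covers only point recovery of one sparse vector; SBGM goes further by learning the distribution of such parameters across many compressed observations, but the sparsity mechanism it builds on is the same.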