🤖 AI Summary
This work addresses the challenge of accurately modeling the rapidly time-varying air-to-ground channels induced by the high-speed motion of low Earth orbit (LEO) satellites, particularly at low elevation angles, where existing models fall short because they neglect real-world geographical effects such as terrain blockage and vegetation absorption. By integrating digital elevation models and land cover data with ray tracing to identify line-of-sight/non-line-of-sight links and reflection paths, the study incorporates diffraction, vegetation attenuation, and atmospheric loss models to rigorously quantify environmental impacts on signal propagation. It further proposes a generative diffusion-based framework that learns the mapping from environmental context to channel characteristics, enabling, for the first time, physically accurate and scalable real-time channel prediction. Experiments show that the method significantly outperforms conventional statistical models on both cellular and LEO satellite measurement datasets, achieving accurate channel state prediction for arbitrary satellite–ground geometries.
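The summary mentions composing diffraction, vegetation, and atmospheric loss terms on top of the geometric link budget. A minimal sketch of that additive composition is below; the function name, coefficients, and default parameters are illustrative placeholders (the vegetation term loosely follows the classic ITU-R exponential foliage fit with frequency in MHz), not the paper's actual model.

```python
import numpy as np

def total_path_loss_db(d_m, f_hz, los,
                       diffraction_db=0.0, foliage_depth_m=0.0,
                       gas_db_per_km=0.01):
    """Illustrative total link loss: free-space path loss plus additive
    excess terms for diffraction, vegetation, and atmospheric gases.
    All coefficients here are placeholders, not the paper's fitted values.
    """
    c = 299_792_458.0  # speed of light (m/s)
    # Free-space path loss: 20*log10(4*pi*d*f/c), in dB
    fspl = 20 * np.log10(4 * np.pi * d_m * f_hz / c)
    # Exponential foliage fit (f in MHz, depth in m); applied only on NLOS links
    veg = 0.0 if los else 0.2 * (f_hz / 1e6) ** 0.3 * foliage_depth_m ** 0.6
    # Gaseous absorption scales with path length
    gas = gas_db_per_km * (d_m / 1e3)
    return fspl + (0.0 if los else diffraction_db) + veg + gas
```

For example, a 1 km LOS link at 2 GHz yields roughly the textbook 98.5 dB of free-space loss, while an NLOS link with nonzero diffraction and foliage terms is strictly lossier.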
📝 Abstract
The fast motion of Low Earth Orbit (LEO) satellites causes the propagation channel to vary rapidly, and its behavior is strongly shaped by the surrounding environment, especially at low elevation angles where signals are highly susceptible to terrain blockage and other environmental effects. Existing studies mostly rely on assumed statistical channel distributions and therefore ignore the influence of the actual geographic environment. In this paper, we propose an environment-aware channel modeling method for air-to-ground wireless links. We leverage real environmental data, including digital elevation models (DEMs) and land cover information, together with ray tracing (RT) to determine whether a link is line-of-sight (LOS) or non-line-of-sight (NLOS) and to identify possible reflection paths of the signal. The resulting obstruction and reflection profiles are then combined with models of diffraction loss, vegetation absorption, and atmospheric attenuation to quantitatively characterize channel behavior in realistic geographic environments. Since RT is computationally intensive, we use RT-generated samples and environmental features to train a scalable diffusion model that can efficiently predict channel performance for arbitrary satellite and ground terminal positions, thereby supporting real-time decision-making. In the experiments, we validate the proposed model with measurement data from both cellular and LEO satellite links, demonstrating its effectiveness in realistic environments.
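The abstract's first geometric step is deciding LOS vs. NLOS from a DEM. A minimal sketch of that check is below: sample terrain heights along the straight line between the two endpoints and test whether the ray clears them. The `dem` callable interface and all parameters are hypothetical, and a real implementation would work on geodetic coordinates and include Fresnel-zone clearance rather than this simple ray test.

```python
import numpy as np

def is_line_of_sight(dem, tx, rx, n_samples=256):
    """Terrain-obstruction test between two points in a local metric frame.

    dem: callable (x, y) -> terrain height in metres (hypothetical interface)
    tx, rx: (x, y, z) endpoint positions
    Returns True if the straight ray clears the sampled terrain profile (LOS).
    """
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    t = np.linspace(0.0, 1.0, n_samples)[1:-1]            # skip the endpoints
    pts = tx[None, :] + t[:, None] * (rx - tx)[None, :]   # points along the ray
    terrain = np.array([dem(x, y) for x, y in pts[:, :2]])
    return bool(np.all(pts[:, 2] > terrain))              # True -> LOS
```

In the paper's pipeline such a test (together with reflection-path search) is performed by a full ray tracer; this sketch only illustrates the blockage decision that the diffusion model later learns to reproduce cheaply.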