Energy-Efficient Satellite IoT Optical Downlinks Using Weather-Adaptive Reinforcement Learning

📅 2025-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low energy efficiency and unstable packet delivery ratio (PDR) of satellite optical downlinks in cloudy and rainy weather, this paper proposes a weather-aware deep reinforcement learning (DRL) scheduling method. It integrates real-time weather forecasts into a Deep Q-Network (DQN) framework, allowing optical link decisions to adapt dynamically to time-varying cloud cover while maintaining the target PDR. Experiments show that the approach improves median energy efficiency over static-threshold and optimal-cloud-threshold baselines without compromising PDR, whereas the baselines improve energy efficiency only at the cost of reduced PDR. The work points toward meteorology-driven, joint communication-energy optimization for high-reliability, low-power space-to-ground optical communications.
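As a rough illustration of the scheduling idea in the summary (not the paper's implementation, which uses a neural Q-network), a tabular Q-learning stand-in might discretize the cloud-cover forecast into a state, choose transmit/skip actions, and reward packets delivered per unit energy. All bin counts, packet yields, energy costs, and the reward penalty below are illustrative assumptions.

```python
# Tabular Q-learning stand-in for the paper's DQN scheduler (illustrative only).
# State: discretized cloud-cover forecast. Action: 0 = skip pass, 1 = transmit.
import random

N_BINS = 5                      # cloud cover discretized into 5 bins (assumed)
ACTIONS = (0, 1)
Q = {(s, a): 0.0 for s in range(N_BINS) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def reward(cloud_bin: int, action: int) -> float:
    """Toy reward: packets per joule, minus a penalty for low-yield passes."""
    if action == 0:
        return 0.0
    packets = 100.0 * (1.0 - cloud_bin / N_BINS)  # heavier cloud, fewer packets
    energy = 5.0                                  # fixed energy cost per pass
    return packets / energy - 5.0                 # negative when clouds are thick

def step(cloud_bin: int) -> None:
    """One epsilon-greedy action and Q-update for a given forecast bin."""
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(cloud_bin, x)])
    r = reward(cloud_bin, a)
    best_next = max(Q[(cloud_bin, x)] for x in ACTIONS)  # toy self-loop dynamics
    Q[(cloud_bin, a)] += ALPHA * (r + GAMMA * best_next - Q[(cloud_bin, a)])

random.seed(0)
for _ in range(2000):
    step(random.randrange(N_BINS))

# Learned policy: transmit in clear bins, skip the cloudiest bin.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_BINS)}
print(policy)
```

The point of the sketch is the structure of the decision, not the numbers: clear-sky forecasts yield positive reward so the agent learns to transmit, while the cloudiest bin yields negative reward and the agent learns to skip, mirroring the weather-adaptive behavior the summary describes.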

📝 Abstract
Internet of Things (IoT) devices have become increasingly ubiquitous, with applications not only in urban areas but in remote areas as well. These devices support industries such as agriculture, forestry, and resource extraction. Because the devices are located in remote areas, satellites are frequently used to collect and deliver IoT device data to customers. As these devices become more advanced and numerous, the amount of data produced has grown rapidly, potentially straining radio frequency (RF) downlink capacity. Free space optical communications, with their wide available bandwidths and high data rates, are a potential solution, but these communication systems are highly vulnerable to weather-related disruptions. As a result, certain communication opportunities are inefficient in terms of the amount of data received versus the power expended. In this paper, we propose a deep reinforcement learning (DRL) method using Deep Q-Networks that takes advantage of weather condition forecasts to improve energy efficiency while delivering the same number of packets as schemes that do not factor weather into routing decisions. We compare this method with baseline approaches that use simple cloud cover thresholds to improve energy efficiency. In testing, the DRL approach provides improved median energy efficiency without a significant reduction in median delivery ratio. Simple cloud cover thresholds were also found to be effective, but the thresholds with the highest energy efficiency had reduced median delivery ratio values.
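The cloud-cover-threshold baseline mentioned in the abstract can be sketched in a few lines: transmit on an optical pass only when the forecast cloud cover is below a fixed cutoff, then measure packets delivered per joule. The threshold value, packet yields, and energy costs below are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of the static cloud-cover-threshold baseline.
def should_transmit(cloud_cover: float, threshold: float = 0.4) -> bool:
    """Use an optical pass only when forecast cloud cover is below the cutoff."""
    return cloud_cover < threshold

def energy_efficiency(packets_delivered: int, energy_joules: float) -> float:
    """Packets delivered per joule expended; higher is better."""
    return packets_delivered / energy_joules if energy_joules > 0 else 0.0

# Toy passes: (forecast cloud cover, packets deliverable, energy cost in J)
passes = [(0.1, 100, 5.0), (0.8, 100, 5.0), (0.3, 100, 5.0)]

delivered = sum(p for c, p, e in passes if should_transmit(c))
spent = sum(e for c, p, e in passes if should_transmit(c))
print(energy_efficiency(delivered, spent))  # 200 packets / 10.0 J = 20.0
```

The paper's finding is that tuning this single threshold for maximum energy efficiency tends to sacrifice delivery ratio, which is the gap the weather-aware DRL scheduler is designed to close.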
Problem

Research questions and friction points this paper is trying to address.

IoT
Satellite Communication
Energy Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Reinforcement Learning
Weather Forecast Integration
Energy-efficient Communication
Ethan Fettes
Non-Terrestrial Networks (NTN) Lab, Department of Systems and Computer Engineering, Carleton University, Canada
Pablo G. Madoery
Non-Terrestrial Networks (NTN) Lab, Department of Systems and Computer Engineering, Carleton University, Canada
H. Yanikomeroglu
Non-Terrestrial Networks (NTN) Lab, Department of Systems and Computer Engineering, Carleton University, Canada
Gunes Karabulut-Kurt
Poly-Grames Research Center, Department of Electrical Engineering, Polytechnique Montréal, Montréal, Canada
Abhishek Naik
National Research Council Canada
reinforcement learning, artificial intelligence
Colin Bellinger
University of Ottawa
Machine Learning, Reinforcement Learning, Robotics, Active Learning, Limited and Imbalanced Data
Stephane Martel
Satellite Systems, MDA, Canada
Khaled Ahmed
Satellite Systems, MDA, Canada
Sameera Siddiqui
Defence Research and Development Canada, Canada