Energy-Efficient Flying LoRa Gateways: A Multi-Agent Reinforcement Learning Approach

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low energy efficiency and resource constraints of LoRa devices in next-generation IoT (NG-IoT), this paper proposes a joint resource optimization framework for unmanned aerial vehicle (UAV)-mounted LoRa gateways. The framework jointly optimizes end-device association, transmission power, spreading factor, and bandwidth to maximize global system energy efficiency (EE). The authors introduce multi-agent proximal policy optimization (MAPPO) into flying LoRa networks for the first time. The problem is formulated as a partially observable Markov decision process (POMDP) and solved under a centralized training with decentralized execution (CTDE) paradigm, which enables scalable and efficient learning. Experimental results show that the proposed method significantly improves system EE and consistently outperforms conventional multi-agent reinforcement learning baselines.

📝 Abstract
With the rapid development of next-generation Internet of Things (NG-IoT) networks, the increasing number of connected devices has led to a surge in power consumption. This rise in energy demand poses significant challenges to resource availability and raises sustainability concerns for large-scale IoT deployments. Efficient energy utilization in communication networks, particularly for power-constrained IoT devices, has thus become a critical area of research. In this paper, we deploy flying LoRa gateways (GWs) mounted on unmanned aerial vehicles (UAVs) to collect data from LoRa end devices (EDs) and transmit it to a central server. Our primary objective is to maximize the global system energy efficiency (EE) of wireless LoRa networks by jointly optimizing transmission power (TP), spreading factor (SF), bandwidth (W), and ED association. To solve this challenging problem, we model it as a partially observable Markov decision process (POMDP), where each flying LoRa GW acts as a learning agent in a cooperative multi-agent reinforcement learning (MARL) scheme under centralized training and decentralized execution (CTDE). Simulation results demonstrate that our proposed method, based on the multi-agent proximal policy optimization (MAPPO) algorithm, significantly improves the global system EE and surpasses conventional MARL schemes.
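To make the optimization target concrete, the sketch below combines the standard LoRa PHY bit-rate formula, R = SF · (W / 2^SF) · CR, with a global energy-efficiency ratio (total bit rate over total transmit power) of the kind the abstract describes. This is a minimal illustration, not the paper's model: the coding rate of 4/5, the link tuples, and the function names are assumptions for the example.

```python
# Minimal sketch (illustrative, not the paper's implementation) of the
# quantities behind the EE objective: per-link LoRa bit rate and the
# global system energy efficiency over all ED-GW associations.

def lora_bit_rate(sf: int, bw_hz: float, cr: float = 4 / 5) -> float:
    """LoRa PHY bit rate R = SF * (BW / 2**SF) * CR, in bits/s.

    sf: spreading factor (7..12), bw_hz: bandwidth in Hz,
    cr: coding rate (4/5 assumed here).
    """
    return sf * (bw_hz / 2 ** sf) * cr


def global_energy_efficiency(links) -> float:
    """Global system EE = total bit rate / total transmit power (bits/J).

    `links` is a list of (tx_power_w, sf, bw_hz) tuples, one per
    end-device-to-gateway association.
    """
    total_rate = sum(lora_bit_rate(sf, bw) for _, sf, bw in links)
    total_power = sum(p for p, _, _ in links)
    return total_rate / total_power


# Example: two end devices with different TP/SF/bandwidth choices.
# A higher SF lowers the bit rate sharply, so it hurts EE unless it is
# needed for range -- the trade-off the joint optimization navigates.
links = [(0.025, 7, 125e3), (0.1, 12, 250e3)]
ee = global_energy_efficiency(links)  # bits per joule
```

In the paper's MARL setting, each flying gateway would pick the TP, SF, W, and association variables for its devices, and a quantity like `ee` (or a per-agent share of it) would serve as the reward signal.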
Problem

Research questions and friction points this paper is trying to address.

Rising power consumption in large-scale NG-IoT deployments strains resource availability and sustainability
Power-constrained LoRa end devices demand efficient energy utilization
Jointly optimizing transmission power, spreading factor, bandwidth, and end-device association is a hard resource-allocation problem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flying LoRa gateways mounted on UAVs for data collection
Cooperative MARL formulation: POMDP with each gateway as a learning agent under CTDE
First application of MAPPO to flying LoRa networks, targeting global system energy efficiency