🤖 AI Summary
To address insufficient physical-layer security (PLS) and vulnerability to eavesdropping in beyond-5G (B5G) multi-cell networks, this paper proposes a federated learning (FL)-enabled multi-agent reinforcement learning (MARL) framework. Each base station operates as an independent agent, collaboratively optimizing the legitimate users' secrecy rate using only local channel and interference information, without sharing raw user data. The framework integrates FL with two deep reinforcement learning algorithms, the REINFORCE deep policy gradient (RDPG) and the deep Q-network (DQN), enabling distributed model aggregation while preserving data privacy. Experimental results show that RDPG converges faster and performs better than DQN. The proposed method significantly improves the secrecy rate while keeping communication overhead and system complexity low, achieving an effective trade-off between privacy preservation and security gain.
📝 Abstract
This paper explores the application of a federated learning-based multi-agent reinforcement learning (MARL) strategy to enhance physical-layer security (PLS) in a multi-cell network in the context of beyond-5G networks. In each cell, a base station (BS) operates as a deep reinforcement learning (DRL) agent that interacts with its environment to maximize the secrecy rate of legitimate users in the presence of an eavesdropper, who attempts to intercept the confidential information exchanged between the BS and its authorized users. The DRL agents are federated in the sense that they share only their network parameters with a central server, never their legitimate users' private data. Two DRL approaches, deep Q-network (DQN) and REINFORCE deep policy gradient (RDPG), are explored and compared. The results demonstrate that RDPG converges more rapidly than DQN, and that the proposed method outperforms the distributed DRL approach. Furthermore, the outcomes illustrate the trade-off between security and complexity.
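The federation described above, where agents upload network parameters to a central server rather than their users' private data, can be sketched as plain federated averaging (FedAvg). The function name, layer names, and shapes below are illustrative assumptions, not taken from the paper, which does not publish code here:

```python
import numpy as np

def fedavg(agent_params):
    """Element-wise average of model parameters from multiple agents.

    agent_params: list of dicts mapping layer name -> weight array,
    one dict per base-station agent. The server computes the mean and
    would broadcast it back for the next local training round.
    """
    n = len(agent_params)
    return {k: sum(p[k] for p in agent_params) / n for k in agent_params[0]}

# Toy example: three BS agents, each with a single-layer "Q-network"
# (hypothetical layer name "w"); only these weights leave the cell.
agents = [{"w": np.full((2, 2), float(i))} for i in range(3)]
global_model = fedavg(agents)
print(global_model["w"])  # every entry is (0 + 1 + 2) / 3 = 1.0
```

In this sketch the server never sees channel measurements or user traffic, only weight tensors, which is the privacy property the abstract attributes to the federated setup.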