🤖 AI Summary
To address the challenge of decentralized learning for multi-base-station cooperative interference management in cellular networks, this paper proposes a multi-agent reinforcement learning framework with selective experience sharing. Each base station agent shares only high-value local experiences—evaluated via signal-to-interference-plus-noise ratio (SINR)—enabling fully decentralized training and execution. A novel SINR-based experience relevance metric and a sparsified sharing mechanism drastically reduce communication overhead while preserving learning efficacy. Experiments demonstrate that, with 75% less shared experience, the proposed method achieves 98% of the spectral efficiency attained by full-sharing baselines, significantly outperforming existing decentralized multi-agent RL approaches. The framework effectively balances learning performance, interference suppression, and system scalability.
📝 Abstract
We propose a novel multi-agent reinforcement learning (RL) approach for inter-cell interference mitigation, in which agents selectively share their experiences with other agents. Each base station is equipped with an agent, which receives signal-to-interference-plus-noise ratio (SINR) measurements from its own associated users. This information is used to evaluate and selectively share experiences with neighboring agents. The idea is that even a few pertinent experiences from other agents can lead to effective learning. This approach enables fully decentralized training and execution, minimizes information sharing between agents, and significantly reduces communication overhead, which typically burdens interference management schemes. The proposed method outperforms state-of-the-art multi-agent RL techniques in which training is done in a decentralized manner. Furthermore, with a 75% reduction in experience sharing, the proposed algorithm achieves 98% of the spectral efficiency obtained by algorithms sharing all experiences.
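The selective sharing idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the relevance rule used here (rank local transitions by observed SINR and share only the top fraction) and all names (`SelectiveSharingAgent`, `share_fraction`, `select_for_sharing`) are assumptions chosen to mirror the "75% less shared experience" setting; the paper's actual SINR-based relevance metric may differ.

```python
from collections import deque

class SelectiveSharingAgent:
    """Hypothetical sketch of SINR-gated experience sharing for one
    base-station agent. The ranking-by-SINR rule is an illustrative
    assumption, not the paper's exact relevance metric."""

    def __init__(self, share_fraction=0.25, buffer_size=10_000):
        # share_fraction=0.25 mirrors the abstract's 75% reduction in sharing
        self.buffer = deque(maxlen=buffer_size)  # local replay buffer
        self.share_fraction = share_fraction

    def store(self, state, action, reward, next_state, sinr_db):
        """Record a local transition together with the SINR reported
        by the associated user (used later to score relevance)."""
        self.buffer.append((state, action, reward, next_state, sinr_db))

    def select_for_sharing(self):
        """Return only the top share_fraction of local experiences,
        ranked by SINR, for transmission to neighboring agents."""
        if not self.buffer:
            return []
        ranked = sorted(self.buffer, key=lambda e: e[4], reverse=True)
        k = max(1, int(len(ranked) * self.share_fraction))
        return ranked[:k]

    def receive(self, shared_experiences):
        """Merge experiences shared by neighbors into the local buffer,
        so the local RL update can learn from them as well."""
        self.buffer.extend(shared_experiences)
```

Under this sketch, each agent trains on its own buffer plus the small set of high-SINR transitions received from neighbors, keeping both training and execution fully decentralized.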