AI Summary
This work addresses the high energy consumption of cell-free massive MIMO networks in downlink transmission under dynamic traffic loads by proposing the first fully distributed multi-agent deep reinforcement learning (MADRL) framework for this setting. The approach enables each access point to autonomously and collaboratively decide on antenna reconfiguration and advanced sleep modes (ASM) without relying on a central controller, adapting in real time to traffic fluctuations while maintaining quality of service. Experimental results demonstrate substantial energy-efficiency gains: the method reduces power consumption by 56.23% compared to a system without energy-saving mechanisms and achieves 30.12% additional savings over a non-learning baseline that uses only the lightest sleep mode, with only a marginal increase in packet loss rate. Moreover, at comparable power consumption levels, it yields substantially lower packet loss than a DQN-based algorithm.
Abstract
This paper focuses on energy savings in downlink operation of cell-free massive MIMO (CF mMIMO) networks under dynamic traffic conditions. We propose a multi-agent deep reinforcement learning (MADRL) algorithm that enables each access point (AP) to autonomously control antenna reconfiguration and advanced sleep mode (ASM) selection. After training, the proposed framework operates in a fully distributed manner, eliminating the need for centralized control and allowing each AP to adjust dynamically to real-time traffic fluctuations. Simulation results show that the proposed algorithm reduces power consumption (PC) by 56.23% compared to systems without any energy-saving scheme and by 30.12% relative to a non-learning mechanism that utilizes only the lightest sleep mode, with only a slight increase in drop ratio. Moreover, compared to the widely used deep Q-network (DQN) algorithm, it achieves a similar PC level with a significantly lower drop ratio.
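To make the distributed control idea concrete, the following is a minimal, hypothetical sketch of the per-AP decision loop the abstract describes: after training, each access point independently maps its locally observed traffic load to a joint action (active-antenna count, sleep-mode depth) that trades power against dropped traffic. The state and action spaces, the reward shape, and the tabular Q-learning agent standing in for the paper's deep RL agents are all illustrative assumptions, not the authors' actual design.

```python
import random

ANTENNAS = (1, 2, 4)                            # active antennas per AP (assumed)
MODES = ("active", "SM1", "SM2", "SM3")         # ASM depth levels (assumed)
ACTIONS = [(n, m) for n in ANTENNAS for m in MODES]

class APAgent:
    """One independent agent per AP (tabular stand-in for a DQN/MADRL policy)."""
    def __init__(self, n_loads=4, eps=0.1, alpha=0.5, gamma=0.0):
        self.q = {(s, a): 0.0 for s in range(n_loads) for a in range(len(ACTIONS))}
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, load):
        # Epsilon-greedy over the AP's local Q-values: no central controller.
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[(load, a)])

    def learn(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, b)] for b in range(len(ACTIONS)))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def reward(load, a):
    """Assumed reward: penalize power draw and dropped traffic (QoS proxy)."""
    n, mode = ACTIONS[a]
    power = n + {"active": 2.0, "SM1": 1.0, "SM2": 0.5, "SM3": 0.1}[mode]
    served = n if mode == "active" else 0       # a sleeping AP serves no traffic
    return -(power + 10.0 * max(0, load - served))

random.seed(0)
# gamma=0 (myopic) keeps this toy deterministic; a real MADRL agent uses gamma > 0.
agent = APAgent(gamma=0.0)
for _ in range(20_000):
    s = random.randrange(4)                     # local traffic load level 0..3
    a = agent.act(s)
    agent.learn(s, a, reward(s, a), random.randrange(4))

policy = {s: ACTIONS[max(range(len(ACTIONS)), key=lambda a: agent.q[(s, a)])]
          for s in range(4)}
print(policy)
```

Under this assumed reward, the learned per-AP policy drops into the deepest sleep mode with a single antenna at zero load and switches all antennas on at peak load, mirroring the power-vs-drop-ratio trade-off the paper evaluates.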