Energy Saving for Cell-Free Massive MIMO Networks: A Multi-Agent Deep Reinforcement Learning Approach

📅 2026-04-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high energy consumption of cell-free massive MIMO networks in downlink transmission under dynamic traffic loads by proposing the first fully distributed multi-agent deep reinforcement learning (MADRL) framework. The approach enables each access point to autonomously and collaboratively decide on antenna reconfiguration and advanced sleep modes (ASM) without relying on a central controller, thereby adapting in real time to traffic fluctuations while maintaining quality of service. Experimental results demonstrate that the proposed method significantly improves energy efficiency: it reduces power consumption by 56.23% compared to a system without energy-saving mechanisms and achieves 30.12% additional savings over a non-learning baseline employing only the lightest sleep mode, with only a marginal increase in packet loss rate. Moreover, at comparable power consumption levels, it yields substantially lower packet loss than a DQN-based algorithm.
πŸ“ Abstract
This paper focuses on energy savings in downlink operation of cell-free massive MIMO (CF mMIMO) networks under dynamic traffic conditions. We propose a multi-agent deep reinforcement learning (MADRL) algorithm that enables each access point (AP) to autonomously control antenna reconfiguration and advanced sleep mode (ASM) selection. After the training process, the proposed framework operates in a fully distributed manner, eliminating the need for centralized control and allowing each AP to dynamically adjust to real-time traffic fluctuations. Simulation results show that the proposed algorithm reduces power consumption (PC) by 56.23% compared to systems without any energy-saving scheme and by 30.12% relative to a non-learning mechanism that only utilizes the lightest sleep mode, with only a slight increase in drop ratio. Moreover, compared to the widely used deep Q-network (DQN) algorithm, it achieves a similar PC level but with a significantly lower drop ratio.
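The distributed decision loop the abstract describes, where each AP independently learns to pick an antenna configuration and sleep-mode depth from its local traffic, can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's method: the state/action spaces, reward, and network architecture below are invented for illustration, and tabular independent learners stand in for the deep MADRL agents.

```python
# Toy sketch of fully distributed per-AP control (hypothetical model; the
# paper's actual MADRL states, actions, and rewards are not reproduced here).
# Each AP agent observes a quantized local traffic load and picks a joint
# (antenna count, sleep-mode depth) action; after training it acts greedily
# with no central controller, mirroring the fully distributed operation.
import random

TRAFFIC_LEVELS = 4  # quantized local load: 0 (idle) .. 3 (peak)
ACTIONS = [(n, m) for n in (1, 2, 4) for m in range(3)]  # (antennas, sleep depth)

def reward(load, antennas, sleep_mode):
    """Toy reward: trade off power against QoS, as in the paper's PC/drop-ratio
    trade-off. Deeper sleep cuts power but delays service under load."""
    power = antennas * (3 - sleep_mode)                  # deeper sleep -> less power
    drop = max(0, load - antennas) * 10 + load * sleep_mode * 2  # drop-ratio proxy
    return -(power + drop)

class APAgent:
    """One independent learner per AP (no message passing after training)."""
    def __init__(self, eps=0.1, alpha=0.5):
        self.q = {(s, a): 0.0 for s in range(TRAFFIC_LEVELS) for a in ACTIONS}
        self.eps, self.alpha = eps, alpha

    def act(self, load, explore=True):
        if explore and random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(load, a)])

    def learn(self, load, action, r):
        # Contextual-bandit style update (no state transition in this toy model).
        key = (load, action)
        self.q[key] += self.alpha * (r - self.q[key])

def train(agents, episodes=5000, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        for ap in agents:
            load = rng.randrange(TRAFFIC_LEVELS)
            action = ap.act(load)
            ap.learn(load, action, reward(load, *action))

random.seed(0)
aps = [APAgent() for _ in range(3)]  # three APs, each with its own agent
train(aps)
# After training, each AP decides locally: deep sleep when idle, full power at peak.
idle_action = aps[0].act(0, explore=False)
peak_action = aps[0].act(3, explore=False)
```

The key structural point carried over from the abstract is that training may be coordinated, but execution is per-AP: `act` consumes only local observations, so no central controller is needed at run time.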
Problem

Research questions and friction points this paper is trying to address.

energy saving
cell-free massive MIMO
dynamic traffic
power consumption
downlink operation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent Deep Reinforcement Learning
Cell-Free Massive MIMO
Energy Saving
Advanced Sleep Mode
Distributed Control
Qichen Wang
Department of Communication Systems, KTH Royal Institute of Technology, Sweden
Keyu Li
Department of Communication Systems, KTH Royal Institute of Technology, Sweden
Ozan Alp Topal
KTH Royal Institute of Technology
Wireless Communications, Green Networks, Cell-free Massive MIMO, Convex Optimization
Özlem Tugfe Demir
Department of Electrical and Electronics Engineering, Bilkent University, Turkiye
Mustafa Ozger
Department of Electronic Systems, Aalborg University, Denmark; Department of Communication Systems, KTH Royal Institute of Technology, Sweden
Cicek Cavdar
Professor of Communication Systems, KTH Royal Institute of Technology
Mobile Networks, Drone Communications, AI Enabled Network Management