🤖 AI Summary
To address the challenges of time-varying user and access point (AP) counts, high computational complexity, and poor generalizability of conventional power allocation methods in dynamic large-scale cell-free massive MIMO systems, this paper proposes an end-to-end joint uplink-downlink power allocation framework based on a position-encoding Transformer. It is the first to employ a purely geometry-driven Transformer architecture for max-min fairness power prediction, enabling zero-shot generalization to arbitrary network scales without retraining or architectural modification. The method combines spatial position encoding, end-to-end supervised learning, and joint modeling of the uplink-downlink power mapping. Experimental results demonstrate a utility rate of 98.7%, robust performance under ±50% fluctuations in user/AP counts, and a two-order-of-magnitude reduction in inference latency, significantly outperforming state-of-the-art optimization-based and deep learning approaches.
📝 Abstract
Power allocation is an important task in wireless communication networks. Classical optimization algorithms and deep learning methods, while effective in small and static scenarios, become either computationally demanding or unsuitable for large and dynamic networks with varying user loads. This letter explores the potential of transformer-based deep learning models to address these challenges. We propose a transformer neural network that jointly predicts optimal uplink and downlink power using only user and access point positions. The max-min fairness problem in cell-free massive multiple-input multiple-output (MIMO) systems is considered. Numerical results show that the trained model provides near-optimal performance and adapts to varying numbers of users and access points without retraining, additional processing, or changes to its neural network architecture. This demonstrates the effectiveness of the proposed model in achieving robust and flexible power allocation for dynamic networks.
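A key structural reason a transformer can adapt to varying numbers of users and APs is that self-attention operates on a *set* of per-node embeddings of arbitrary size, so the same trained weights apply to any network scale. The following NumPy sketch illustrates this property only; the embedding dimension, single attention head, random (untrained) weights, and the `predict_powers` helper are all illustrative assumptions, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension (assumed for illustration)

# Hypothetical, untrained weights: a position encoder, one attention head,
# and a readout mapping each node to (uplink, downlink) power levels.
W_in = rng.standard_normal((2, d)) * 0.1   # 2-D position -> embedding
W_q = rng.standard_normal((d, d)) * 0.1
W_k = rng.standard_normal((d, d)) * 0.1
W_v = rng.standard_normal((d, d)) * 0.1
W_out = rng.standard_normal((d, 2)) * 0.1  # embedding -> two power values

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_powers(positions):
    """Map an (n, 2) array of node coordinates to (n, 2) normalized powers.

    The attention matrix is n x n for whatever n is given, so no part of
    the computation assumes a fixed network size.
    """
    h = positions @ W_in                      # embed raw positions
    q, k, v = h @ W_q, h @ W_k, h @ W_v
    attn = softmax(q @ k.T / np.sqrt(d))      # scaled dot-product attention
    h = attn @ v                              # mix information across nodes
    return 1.0 / (1.0 + np.exp(-(h @ W_out))) # sigmoid keeps powers in (0, 1)

# The identical weights handle different network sizes without retraining:
p5 = predict_powers(rng.uniform(0.0, 1000.0, size=(5, 2)))
p8 = predict_powers(rng.uniform(0.0, 1000.0, size=(8, 2)))
print(p5.shape, p8.shape)  # (5, 2) (8, 2)
```

In the paper's setting the weights would instead be learned end-to-end against max-min-fair power targets, but the size-agnostic forward pass shown here is what permits zero-shot generalization to new user/AP counts.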