🤖 AI Summary
Joint uplink-downlink power allocation in cell-free massive MIMO networks under dynamic user loads poses severe scalability challenges due to high computational complexity.
Method: This paper proposes a Tree-Transformer hybrid architecture. A binary tree hierarchically compresses user features into a single global root node, to which a lightweight Transformer encoder is applied—achieving O(log N) depth and O(N) linear complexity. The design integrates tree-based feature aggregation, a shared-parameter decoder, and max-min fairness modeling.
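For context, the max-min fairness problem referenced above takes the following generic form (the symbols here are illustrative; the paper's exact SINR expression for cell-free massive MIMO is not reproduced):

$$
\max_{\{p_k\}} \; \min_{k \in \{1,\dots,N\}} \; \mathrm{SINR}_k(\{p_k\})
\quad \text{s.t.} \quad 0 \le p_k \le p_{\max}, \quad k = 1,\dots,N,
$$

i.e., the allocation lifts the worst user's signal quality as high as the power budget allows, rather than maximizing the sum rate.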
Contribution/Results: The method enables zero-shot generalization to arbitrary numbers of users without retraining. Experiments demonstrate near-optimal performance in power allocation while accelerating inference by over an order of magnitude compared to full-attention baselines. It effectively breaks the scalability bottleneck inherent in conventional attention-based approaches for large-scale cell-free systems.
📝 Abstract
Power allocation remains a fundamental challenge in wireless communication networks, particularly under dynamic user loads and large-scale deployments. While Transformer-based models have demonstrated strong performance, their computational cost scales poorly with the number of users. In this work, we propose a novel hybrid Tree-Transformer architecture that achieves scalable per-user power allocation. Our model compresses user features via a binary tree into a global root representation, applies a Transformer encoder solely to this root, and decodes per-user uplink and downlink powers through a shared decoder. This design achieves logarithmic depth and linear total complexity, enabling efficient inference across large and variable user sets without retraining or architectural changes. We evaluate our model on the max-min fairness problem in cell-free massive MIMO systems and demonstrate that it achieves near-optimal performance while significantly reducing inference time compared to full-attention baselines.
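The abstract's pipeline (tree-compress to a root, encode the root, decode per user with shared weights) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the merge layer, the `tanh` stand-in for the lightweight Transformer encoder, and all shapes are assumptions made for clarity.

```python
import numpy as np

D = 16  # assumed per-user feature dimension
rng = np.random.default_rng(0)

# Shared parameters: one merge layer reused at every tree level,
# one decoder reused for every user (this is what makes N arbitrary).
W_merge = rng.standard_normal((2 * D, D)) * 0.1
W_dec = rng.standard_normal((2 * D, 2)) * 0.1  # -> (uplink, downlink) per user

def tree_root(users):
    """Merge N user feature vectors pairwise into one root: O(log N) depth."""
    x = users
    while x.shape[0] > 1:
        if x.shape[0] % 2 == 1:          # pad odd levels by repeating the last node
            x = np.vstack([x, x[-1:]])
        x = np.tanh(x.reshape(-1, 2 * D) @ W_merge)  # combine adjacent pairs
    return x[0]

def allocate(users):
    """Encode the root globally, then decode each user with shared weights."""
    root = np.tanh(tree_root(users))     # stand-in for the Transformer encoder on the root
    ctx = np.concatenate([np.tile(root, (users.shape[0], 1)), users], axis=1)
    return 1.0 / (1.0 + np.exp(-(ctx @ W_dec)))  # sigmoid -> powers in (0, 1)

# The same weights handle any user count N without retraining:
for n in (3, 8, 50):
    print(n, allocate(rng.standard_normal((n, D))).shape)  # (n, 2)
```

Because every level reuses `W_merge` and every user reuses `W_dec`, total work is linear in N while the dependency depth is logarithmic, matching the complexity claims in the abstract.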