Wireless Power Control Based on Large Language Models

📅 2026-02-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes a novel approach based on pre-trained large language models (LLMs) to address two problems in dense, interference-limited wireless networks: the high computational complexity of power control, and the loss of critical interference information caused by graph neural network aggregation. By injecting the physical channel gain matrix into the Transformer's self-attention mechanism as an interference-aware attention bias, the method explicitly fuses the wireless topology with the LLM's pre-trained relational priors. The study further reveals that shallow layers encode topological relationships effectively while deeper layers introduce task-irrelevant semantic noise, motivating a lightweight adaptation strategy that halves model depth without sacrificing performance. Requiring no retraining from scratch and enabling zero-shot transfer, the approach significantly outperforms conventional optimization and graph neural network methods across diverse scenarios, achieving superior generalization, low inference overhead, and state-of-the-art spectral efficiency.

๐Ÿ“ Abstract
This paper investigates the power control problem in wireless networks by repurposing pre-trained large language models (LLMs) as relational reasoning backbones. In hyper-connected interference environments, traditional optimization methods face high computational cost, while standard message passing neural networks suffer from aggregation bottlenecks that can obscure critical high-interference structures. In response, we propose PC-LLM, a physics-informed framework that augments a pre-trained Transformer with an interference-aware attention bias. The proposed bias tuning mechanism injects the physical channel gain matrix directly into the self-attention logits, enabling explicit fusion of wireless topology with pre-trained relational priors without retraining the backbone from scratch. Extensive experiments demonstrate that PC-LLM consistently outperforms both traditional optimization methods and state-of-the-art graph neural network baselines, while exhibiting exceptional zero-shot generalization to unseen environments. We further observe a structural-semantic decoupling phenomenon: topology-relevant relational reasoning is concentrated in shallow layers, whereas deeper layers encode task-irrelevant semantic noise. Motivated by this finding, we develop a lightweight adaptation strategy that reduces model depth by 50%, significantly lowering inference cost while preserving state-of-the-art spectral efficiency.
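The abstract does not give the exact form of the bias, but the core mechanism (adding a term derived from the channel gain matrix to the self-attention logits before the softmax) can be illustrated with a minimal NumPy sketch. The function name `interference_aware_attention`, the log-gain form of the bias, and the scaling factor `beta` are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def interference_aware_attention(Q, K, V, G, beta=1.0):
    """Scaled dot-product attention with a physics-informed bias.

    Hypothetical sketch: the N x N channel gain matrix G (linear scale)
    is injected into the attention logits, so a link attends more
    strongly to links that interfere with it heavily.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)        # standard self-attention logits
    bias = beta * np.log(G + 1e-12)      # assumed interference-aware bias term
    logits = logits + bias
    # numerically stable softmax over the key dimension
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# toy example: 3 transmitter-receiver links, feature dimension 4
rng = np.random.default_rng(0)
N, d = 3, 4
Q = rng.standard_normal((N, d))
K = rng.standard_normal((N, d))
V = rng.standard_normal((N, d))
G = np.abs(rng.standard_normal((N, N)))  # stand-in for |h_ij|^2 channel gains
out = interference_aware_attention(Q, K, V, G)
```

Because the bias enters additively in the logits, the pre-trained attention weights are reshaped rather than replaced, which is consistent with the paper's claim that the backbone needs no retraining from scratch.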
Problem

Research questions and friction points this paper is trying to address.

wireless power control
interference management
large language models
relational reasoning
zero-shot generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
wireless power control
physics-informed attention
zero-shot generalization
structural-semantic decoupling
🔎 Similar Papers
No similar papers found.