Large Language Model Enabled Multi-Task Physical Layer Network

📅 2024-12-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high resource overhead and deployment costs arising from independent modeling of multiple physical-layer tasks—such as multi-user precoding, signal detection, and channel prediction—in 6G systems, this paper proposes the first large language model (LLM)-driven multi-task physical-layer framework. Our method introduces a dedicated multi-task instruction module, an input encoder and output decoder tailored for heterogeneous wireless signals, and integrates instruction tuning with multi-task learning. The resulting unified architecture jointly optimizes performance across all tasks. Simulation results demonstrate that the framework achieves superior overall accuracy and generalization compared to task-specific single-model baselines, improves inference efficiency by over 40%, and significantly reduces training resource consumption, GPU memory footprint, and deployment complexity. This work establishes a scalable LLM paradigm for intelligent 6G physical layers.

Technology Category

Application Category

📝 Abstract
The recent advance of Artificial Intelligence (AI) is continuously reshaping the future 6G wireless communications. Recently, the development of Large Language Models (LLMs) offers a promising approach to effectively improve the performance and generalization for different physical layer tasks. However, most existing works finetune dedicated LLM networks for a single wireless communication task separately. Thus performing diverse physical layer tasks introduces extremely high training resources, memory usage, and deployment costs. To solve the problem, we propose a LLM-enabled multi-task physical layer network to unify multiple tasks with a single LLM. Specifically, we first propose a multi-task LLM framework, which finetunes LLM to perform multi-user precoding, signal detection and channel prediction simultaneously. Besides, multi-task instruction module, input encoders, as well as output decoders, are elaborately designed to distinguish multiple tasks and adapted the features of different formats of wireless data for the features of LLM. Numerical simulations are also displayed to verify the effectiveness of the proposed method.
Problem

Research questions and friction points this paper is trying to address.

Multi-task Learning
Wireless Communication
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task Processing
Large Language Model
Modular Design
🔎 Similar Papers
No similar papers found.