Empowering Large Language Models in Wireless Communication: A Novel Dataset and Fine-Tuning Framework

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited reasoning and mathematical problem-solving capabilities in wireless communications—particularly for non-orthogonal multiple access (NOMA) tasks. Method: This paper introduces the first structured, multi-hop question-answering dataset tailored to communications, and proposes a fine-grained fine-tuning framework based on Pointwise V-Information (PVI)—the first application of PVI theory to LLM fine-tuning. PVI quantifies per-sample information value, enabling sample-level optimization. The method integrates multi-agent collaborative generation, communication entity extraction, question synthesis, NOMA-specific mathematical modeling, and ROUGE-L evaluation. Contribution/Results: Experiments show that PVI-based fine-tuning improves NOMA task performance by 1.31–2.24% and boosts ROUGE-L scores for summarization by 20.9%. These results empirically validate the feasibility and scalable adaptability of LLMs in domain-specific communication tasks.
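The summary describes PVI as quantifying per-sample information value for sample-level optimization. As a rough illustration (not the paper's implementation): PVI for a sample compares the log-likelihood a model assigns to the target given the input against a null-input baseline, so low or negative PVI flags samples whose inputs add little usable information. A minimal sketch, with hypothetical per-sample log-likelihoods standing in for real model scores:

```python
def pvi(logp_with_input, logp_without_input):
    """Pointwise V-Information of input x for target y:
    PVI(x -> y) = log p_g(y | x) - log p_g'(y | null),
    where g is conditioned on the input and g' on a null input.
    Per-sample log-likelihoods are taken as given here."""
    return logp_with_input - logp_without_input

# Hypothetical (log p with input, log p without input) pairs
# for three training examples.
samples = [
    ("easy QA pair",  -0.5, -3.0),
    ("hard QA pair",  -2.0, -2.1),
    ("noisy QA pair", -2.5, -1.0),  # negative PVI: input misleads the model
]

# Rank samples by information value; low/negative-PVI samples are
# candidates for down-weighting or filtering during fine-tuning.
ranked = sorted(samples, key=lambda s: pvi(s[1], s[2]), reverse=True)
for name, lp_x, lp_null in ranked:
    print(f"{name}: PVI = {pvi(lp_x, lp_null):.2f}")
```

In practice the two log-likelihoods would come from models fine-tuned with and without inputs, as in the V-information framework; the ranking step above is the sample-level selection idea.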

📝 Abstract
In this work, we develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) specifically for wireless communication applications. The dataset includes a diverse set of multi-hop questions, including true/false and multiple-choice types, spanning difficulty levels from easy to hard. By utilizing advanced language models for entity extraction and question generation, rigorous data curation processes are employed to maintain high quality and relevance. Additionally, we introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data, yielding performance boosts of 2.24% and 1.31% over baselines for two different models, respectively. To demonstrate the effectiveness of the fine-tuned models on practical tasks, we also consider different tasks, including summarizing optimization problems from technical papers and solving mathematical problems related to non-orthogonal multiple access (NOMA), which are generated by the proposed multi-agent framework. Simulation results show a significant performance gain of 20.9% in the ROUGE-L metric for summarization tasks. We also study the scaling laws of fine-tuning LLMs and the challenges LLMs face in the field of wireless communications, offering insights into their adaptation to wireless communication tasks. This dataset and fine-tuning methodology aim to enhance the training and evaluation of LLMs, contributing to advancements in LLMs for wireless communication research and applications.
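The abstract reports summarization gains in the ROUGE-L metric, which scores a candidate summary against a reference by the longest common subsequence (LCS) of their tokens. A minimal self-contained sketch of the standard ROUGE-L F-score (the example sentences are illustrative, not from the paper):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, via classic DP."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L F-score: LCS-based precision/recall combined with weight beta."""
    c, r = candidate.split(), reference.split()
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    prec, rec = l / len(c), l / len(r)
    return (1 + beta**2) * prec * rec / (rec + beta**2 * prec)

# Illustrative NOMA-style optimization summaries (hypothetical text).
score = rouge_l(
    "minimize total transmit power subject to rate constraints",
    "minimize the total transmit power subject to user rate constraints",
)
print(f"ROUGE-L F-score: {score:.3f}")
```

Because ROUGE-L uses subsequences rather than fixed n-grams, it rewards summaries that preserve the reference's ordering of key terms even when filler words differ.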
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Wireless Communication
Non-Orthogonal Multiple Access (NOMA)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wireless Communication
Pointwise V-Information (PVI) Tuning Method
Large Language Model Optimization
Yushen Lin
School of Electrical and Electronic Engineering, The University of Manchester, M13 9PL, U.K.
Ruichen Zhang
Nanyang Technological University
Next-generation Networking, Edge Intelligence, Agentic AI, Reinforcement Learning, LLM
Wenqi Huang
Technical University of Munich
Image Reconstruction, Magnetic Resonance Imaging, Implicit Neural Representations
Kaidi Wang
School of Electrical and Electronic Engineering, The University of Manchester, M13 9PL, U.K.
Zhiguo Ding
University of Manchester and Khalifa University, Fellow of IEEE, Web of Science Highly Cited
Wireless communications, signal processing, and cross-layer optimization
Daniel K. C. So
School of Electrical and Electronic Engineering, The University of Manchester, M13 9PL, U.K.
D. Niyato
College of Computing and Data Science, Nanyang Technological University, Singapore