🤖 AI Summary
Large language models (LLMs) exhibit limited reasoning and mathematical problem-solving capabilities in wireless communications, particularly on non-orthogonal multiple access (NOMA) tasks.
Method: This paper introduces the first structured, multi-hop question-answering dataset tailored to communications, and proposes a fine-grained fine-tuning framework based on Pointwise V-Information (PVI), the first application of PVI theory to LLM fine-tuning. PVI quantifies the information value of each training sample, enabling sample-level optimization. The method integrates multi-agent collaborative generation, communication-entity extraction, question synthesis, NOMA-specific mathematical modeling, and ROUGE-L evaluation.
Contribution/Results: Experiments show that PVI-based fine-tuning improves NOMA task performance by 1.31–2.24% and boosts ROUGE-L scores for summarization by 20.9%. These results empirically validate the feasibility and scalable adaptability of LLMs in domain-specific communication tasks.
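The summary does not spell out how PVI is computed. Intuitively, a sample's PVI is the log-probability a model assigns to the answer given the question, minus the log-probability a null-input model assigns to the same answer; high-PVI samples are informative, negative-PVI samples are likely noisy. A minimal sketch of ranking training samples by PVI, where all IDs and log-probability values are hypothetical placeholders for scores produced by two fine-tuned models:

```python
def pvi(logp_given_input, logp_given_null):
    # PVI(x -> y) = log p_g(y | x) - log p_g'(y | null), where g is a model
    # fine-tuned with inputs and g' a model fine-tuned with null inputs.
    return logp_given_input - logp_given_null

# Hypothetical per-sample log-probabilities (illustrative numbers only).
samples = [
    {"id": "qa_001", "logp_x": -0.4, "logp_null": -2.3},  # high PVI: informative
    {"id": "qa_002", "logp_x": -1.9, "logp_null": -2.0},  # near-zero PVI
    {"id": "qa_003", "logp_x": -3.1, "logp_null": -1.2},  # negative PVI: likely noisy
]

# Rank samples by information value for sample-level selection.
ranked = sorted(samples, key=lambda s: pvi(s["logp_x"], s["logp_null"]),
                reverse=True)
for s in ranked:
    print(s["id"], round(pvi(s["logp_x"], s["logp_null"]), 2))
```

Fine-tuning can then prioritize or reweight the highest-PVI samples, which is the sample-level optimization the summary refers to.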
📝 Abstract
In this work, we develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) specifically for wireless communication applications. The dataset comprises a diverse set of multi-hop questions, including true/false and multiple-choice types, spanning difficulty levels from easy to hard. By utilizing advanced language models for entity extraction and question generation, rigorous data curation processes are employed to maintain high quality and relevance. Additionally, we introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data; it yields performance gains of 2.24% and 1.31% over baselines for two different models, respectively. To demonstrate the effectiveness of models fine-tuned with the proposed methodologies on practical tasks, we also consider tasks including summarizing optimization problems from technical papers and solving mathematical problems related to non-orthogonal multiple access (NOMA), generated using the proposed multi-agent framework. Simulation results show a significant performance gain of 20.9% in the ROUGE-L metric on summarization tasks. We also study the scaling laws of fine-tuning LLMs and the challenges LLMs face in the field of wireless communications, offering insights into their adaptation to wireless communication tasks. This dataset and fine-tuning methodology aim to enhance the training and evaluation of LLMs, contributing to advancements in LLMs for wireless communication research and applications.
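For readers unfamiliar with the summarization metric: ROUGE-L scores a candidate summary by the longest common subsequence (LCS) of tokens it shares with a reference. A minimal sketch of a simplified F1 variant (the official metric uses a beta-weighted F-measure, stemming, and other preprocessing not shown here):

```python
def lcs_len(a, b):
    # Dynamic-programming longest-common-subsequence length over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ta == tb else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    # Simplified ROUGE-L: F1 over LCS-based precision and recall.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(rouge_l_f1("the cat on the mat", "the cat sat on the mat"))
```

Because LCS rewards in-order overlap rather than exact n-gram matches, ROUGE-L tolerates paraphrased summaries better than ROUGE-N, which makes it a common choice for the summarization task evaluated here.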