🤖 AI Summary
Spectrum scarcity and the explosive growth of wireless devices make signal identification under strong noise interference and dynamic spectrum allocation increasingly difficult. To address these challenges, this paper pioneers the integration of large language models (LLMs) into cognitive radio systems. We propose two novel modules: Hybrid Prompt and Token Reprogramming (HPTR), which enables deep coupling between LLMs and signal processing via hybrid prompt engineering and token-level RF feature reprogramming; and Frequency-Attuned Fusion (FAF), which explicitly encodes frequency-domain priors to enhance high-frequency feature representation. Our framework unifies signal classification, denoising, and spectrum allocation within a single multi-task learning paradigm. Extensive experiments on multiple benchmark datasets demonstrate consistent and significant gains over state-of-the-art methods across all three tasks.
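The token-level reprogramming idea can be illustrated with a minimal sketch: RF feature patches are mapped, via cross-attention against a compressed set of text-embedding prototypes, into the LLM's token space so a frozen language model can consume them. All dimensions, the prototype mechanism, and the class name below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TokenReprogramming(nn.Module):
    """Hypothetical sketch of token-level RF feature reprogramming:
    cross-attention from signal-patch embeddings (queries) to a small set
    of learnable text prototypes (keys/values) in the LLM embedding space."""

    def __init__(self, d_signal=64, d_llm=768, n_prototypes=100, vocab_size=50257):
        super().__init__()
        # Compress the full LLM vocabulary into a few learnable prototypes.
        self.prototype_proj = nn.Linear(vocab_size, n_prototypes)
        # Lift raw signal features to the LLM embedding width.
        self.query = nn.Linear(d_signal, d_llm)
        self.attn = nn.MultiheadAttention(d_llm, num_heads=8, batch_first=True)

    def forward(self, signal_patches, word_embeddings):
        # signal_patches: (B, N, d_signal); word_embeddings: (vocab, d_llm)
        prototypes = self.prototype_proj(word_embeddings.T).T  # (n_proto, d_llm)
        q = self.query(signal_patches)                         # (B, N, d_llm)
        kv = prototypes.unsqueeze(0).expand(q.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)                          # (B, N, d_llm)
        return out
```

The output sequence has the LLM's embedding width, so it can be concatenated with prompt-token embeddings and fed through the (frozen) LLM backbone.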
📝 Abstract
The increasing scarcity of spectrum resources and the rapid growth of wireless devices have made efficient management of radio networks a critical challenge. Cognitive Radio Technology (CRT), when integrated with deep learning (DL), offers promising solutions for tasks such as radio signal classification (RSC), signal denoising, and spectrum allocation. However, existing DL-based CRT frameworks are often task-specific and lack scalability to diverse real-world scenarios. Meanwhile, Large Language Models (LLMs) have demonstrated exceptional generalization capabilities across multiple domains, making them promising candidates for advancing CRT technologies. In this paper, we introduce RadioLLM, a novel framework that incorporates Hybrid Prompt and Token Reprogramming (HPTR) and a Frequency-Attuned Fusion (FAF) module to enhance LLMs for CRT tasks. HPTR enables the integration of radio signal features with expert knowledge, while FAF improves the modeling of high-frequency features critical for precise signal processing. These innovations allow RadioLLM to handle diverse CRT tasks, bridging the gap between LLMs and traditional signal processing methods. Extensive empirical studies on multiple benchmark datasets demonstrate that the proposed RadioLLM achieves superior performance over current baselines.
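One way to read "explicitly encoding frequency-domain priors" is a fusion block that takes an rFFT of the feature sequence, applies a learnable per-frequency gain initialised to favour high-frequency bins, and gates the filtered view back into the time-domain features. The sketch below follows that reading; the gating design, initialisation, and class name are assumptions for illustration, not the paper's exact FAF module.

```python
import torch
import torch.nn as nn

class FrequencyAttunedFusion(nn.Module):
    """Hypothetical sketch of frequency-attuned fusion: filter the feature
    sequence in the frequency domain with a learnable gain that emphasises
    high-frequency bins, then gate-fuse with the original features."""

    def __init__(self, seq_len, d_model):
        super().__init__()
        n_freq = seq_len // 2 + 1
        # Per-frequency gain, ramped upward so high bins start amplified.
        init = torch.linspace(0.5, 1.5, n_freq).view(1, n_freq, 1)
        self.freq_gain = nn.Parameter(init)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, x):
        # x: (B, N, D) time-domain features
        spec = torch.fft.rfft(x, dim=1)                  # (B, N//2+1, D)
        filtered = torch.fft.irfft(spec * self.freq_gain, n=x.size(1), dim=1)
        g = torch.sigmoid(self.gate(torch.cat([x, filtered], dim=-1)))
        return g * x + (1 - g) * filtered                # (B, N, D)
```

Because the gain acts per frequency bin rather than per time step, the module can sharpen fast modulation transitions that plain time-domain layers tend to smooth over, which is the kind of high-frequency detail denoising and classification both depend on.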