🤖 AI Summary
This work addresses the challenge of achieving both high detection accuracy and low latency in real-time spectrum sensing. The authors propose the first systematic application of image processing techniques to this problem by treating spectrograms as images. Their approach integrates adaptive thresholding, morphological operations, and connected-component labeling, and is implemented within a multi-threaded parallel architecture to meet stringent latency requirements. Evaluated at 100 MHz bandwidth with a raw I/Q input rate of 3.2 Gbps, the system achieves a detection rate of 80.42% at an IoU threshold of 0.4, outperforming state-of-the-art learning-based methods such as DeepRadar and a GPU-accelerated U-Net. Against Searchlight, an energy-based baseline, it reduces latency by a factor of 20.51 and improves IoU by 22.31%.
📝 Abstract
Energy detection is widely used for spectrum sensing, but accurately localizing the time and frequency occupation of signals in real-time for efficient spectrum sharing remains challenging. To address this challenge, we present RISE, a software-based spectrum sensing system designed for real-time signal detection and localization. RISE treats time-frequency spectrum plots as images and applies adaptive thresholding, morphological operations, and connected-component labeling with a multi-threaded architecture. We evaluate RISE using both synthetic data and controlled over-the-air (OTA) experiments across diverse signal types. Results show that RISE satisfies real-time latency constraints while achieving a probability of detection of 80.42% at an intersection-over-union (IoU) threshold of 0.4. RISE sustains a raw I/Q input rate of 3.2 Gbps for 100 MHz bandwidth sensing with time and frequency resolutions of 10.24 µs and 97.6 kHz, respectively. Compared to Searchlight, a representative energy-based method, RISE achieves 20.51x lower latency and 22.31% higher IoU. Compared to machine learning baselines, RISE improves IoU by 56.02% over DeepRadar while meeting the real-time deadline, which a GPU-accelerated U-Net exceeds by 213.38x.
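The three image-processing stages the abstract names can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: the function name, the percentile-based noise-floor estimate, the structuring-element sizes, and the synthetic test spectrogram are all assumptions, and SciPy's `ndimage` stands in for whatever primitives RISE actually uses.

```python
import numpy as np
from scipy import ndimage

def detect_signals(spectrogram, noise_percentile=75, margin_db=6.0):
    """Localize signal regions in a power spectrogram (dB, time x freq),
    treating it as a grayscale image. Illustrative sketch only."""
    # 1) Adaptive thresholding: estimate a per-frequency-bin noise floor
    #    and keep cells a fixed margin above it.
    noise_floor = np.percentile(spectrogram, noise_percentile, axis=0)
    mask = spectrogram > (noise_floor + margin_db)

    # 2) Morphological operations: closing fills small gaps inside a
    #    detection; opening removes isolated noise spikes.
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_opening(mask, structure=np.ones((2, 2)))

    # 3) Connected-component labeling: each component is one detected
    #    time-frequency occupation; report its bounding box.
    labels, _ = ndimage.label(mask)
    slices = ndimage.find_objects(labels)
    return [(s[0].start, s[0].stop, s[1].start, s[1].stop) for s in slices]

# Synthetic spectrogram: Gaussian noise floor plus one 20 dB burst.
rng = np.random.default_rng(0)
spec = rng.normal(0.0, 1.0, size=(128, 256))
spec[40:60, 100:140] += 20.0
print(detect_signals(spec))  # one box around rows 40-60, cols 100-140
```

In a real-time pipeline each stage is embarrassingly parallel across spectrogram tiles, which is presumably what makes RISE's multi-threaded architecture effective, though the thread layout here is not shown.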