🤖 AI Summary
Traditional signal detection methods for BPSK/QAM modulation suffer from poor robustness, high computational complexity, and limited generalization capability. To address these limitations, this paper proposes a novel signal detection framework based on denoising diffusion models. Methodologically, we formulate an intelligent detection theory driven by stochastic differential equations (SDEs), establish for the first time an explicit mathematical mapping between signal-to-noise ratio (SNR) and diffusion timestep, and introduce a fine-tuning-free scaling normalization technique that enables zero-shot generalization across SNR regimes. Experiments demonstrate that our approach achieves significantly lower symbol error rates than maximum-likelihood (ML) estimation for both BPSK and QAM, while maintaining only $\mathcal{O}(n^2)$ computational complexity, balancing high accuracy and efficiency. The core contribution lies in the theoretical reconstruction and engineering adaptation of diffusion models specifically for communication signal detection.
📝 Abstract
In this paper, a signal detection method based on the denoising diffusion model (DM) is proposed, which outperforms the maximum likelihood (ML) estimation method that has long been regarded as the optimal signal detection technique. Theoretically, a novel mathematical theory for intelligent signal detection based on stochastic differential equations (SDEs) is established, demonstrating the effectiveness of the DM in reducing the additive white Gaussian noise in received signals. Moreover, a mathematical relationship between the signal-to-noise ratio (SNR) and the timestep in the DM is established, revealing that for any given SNR, a corresponding optimal timestep can be identified. Furthermore, to address potential issues with out-of-distribution inputs to the DM, we employ a mathematical scaling technique that allows the trained DM to handle signal detection across a wide range of SNRs without any fine-tuning. Building on the above theoretical foundation, we propose a DM-based signal detection method with the diffusion transformer (DiT) serving as the backbone neural network, whose computational complexity is $\mathcal{O}(n^2)$. Simulation results demonstrate that, for BPSK and QAM modulation schemes, the DM-based method achieves a significantly lower symbol error rate (SER) compared to ML estimation, while maintaining a much lower computational complexity.
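To make the SNR-to-timestep mapping and the scaling normalization concrete, here is a minimal sketch under common DDPM assumptions (linear $\beta$ schedule with 1000 steps, unit-power symbols); the paper's exact schedule and derivation are not given in the abstract, so the matching rule $\bar{\alpha}_t = \mathrm{SNR}/(1+\mathrm{SNR})$ below is an illustrative reconstruction, not the authors' stated formula:

```python
import numpy as np

# Assumption: standard DDPM linear variance schedule (Ho et al. defaults);
# the paper's actual schedule is not specified in the abstract.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # \bar{alpha}_t, monotonically decreasing in t

def snr_to_timestep(snr_linear: float) -> int:
    """Map a channel SNR (linear scale) to the diffusion timestep whose
    noise level matches the channel noise.

    For unit-power symbols x and received y = x + n with noise variance
    1/SNR, sqrt(abar_t) * y has the same distribution as the forward
    diffusion state x_t when abar_t = SNR / (1 + SNR); we choose the
    timestep minimizing the gap to that target.
    """
    target = snr_linear / (1.0 + snr_linear)
    return int(np.argmin(np.abs(alpha_bar - target)))

def scale_received(y: np.ndarray, snr_linear: float):
    """Scaling normalization: rescale the received signal so it lies on
    the DM's training distribution at the matched timestep, allowing one
    trained model to cover a wide SNR range without fine-tuning."""
    t = snr_to_timestep(snr_linear)
    return np.sqrt(alpha_bar[t]) * y, t
```

A higher SNR maps to an earlier (less noisy) timestep, which is the monotonic behavior the SNR-timestep relationship implies; the scaled signal and matched timestep are then fed to the trained DM for reverse-diffusion denoising.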