Enabling Training-Free Semantic Communication Systems with Generative Diffusion Models

šŸ“… 2025-05-02
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Existing semantic communication systems rely on either large-scale training data or channel priors, and struggle to achieve generalizability and noise robustness at the same time. To address this, we propose the first training-free end-to-end semantic communication framework, pioneering the integration of generative diffusion models (GDMs) into semantic encoding and decoding. Our method introduces a transmitter–receiver-coordinated two-stage forward diffusion process paired with a DDIM-based inverse process, enabling noise-adaptive reconstruction without explicit channel modeling or data-driven training. We further devise a noise-aware sampling-step optimization strategy to enhance reconstruction fidelity under varying noise conditions. Evaluated on the Kodak dataset, our approach substantially outperforms state-of-the-art baselines, achieving a 3.2 dB PSNR gain and a 0.08 SSIM improvement, and demonstrates for the first time the feasibility of high performance and strong robustness within a training-free paradigm.

šŸ“ Abstract
Semantic communication (SemCom) has recently emerged as a promising paradigm for next-generation wireless systems. Empowered by advanced artificial intelligence (AI) technologies, SemCom has achieved significant improvements in transmission quality and efficiency. However, existing SemCom systems either rely on training over large datasets and specific channel conditions or suffer from performance degradation under channel noise when operating in a training-free manner. To address these issues, we explore the use of generative diffusion models (GDMs) as training-free SemCom systems. Specifically, we design a semantic encoding and decoding method based on the inversion and sampling process of the denoising diffusion implicit model (DDIM), which introduces a two-stage forward diffusion process, split between the transmitter and receiver to enhance robustness against channel noise. Moreover, we optimize sampling steps to compensate for the increased noise level caused by channel noise. We also conduct a brief analysis to provide insights about this design. Simulations on the Kodak dataset validate that the proposed system outperforms the existing baseline SemCom systems across various metrics.
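The two-stage idea in the abstract (diffuse the image partway at the transmitter, let the channel corrupt the result, then fold the channel noise into the diffusion state at the receiver and resume reverse sampling from a later step) can be sketched in a few lines. Below is a minimal NumPy illustration assuming an AWGN channel, a unit-variance source, and the standard DDIM forward marginal; the function names and the closed-form noise-folding step are our own illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def ddim_forward(x0, alpha_bar, eps):
    """Standard forward marginal: x_t = sqrt(a)*x0 + sqrt(1-a)*eps."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def awgn_channel(x, snr_db, rng):
    """Add white Gaussian channel noise (unit signal power assumed)."""
    sigma2 = 10.0 ** (-snr_db / 10.0)
    return x + np.sqrt(sigma2) * rng.standard_normal(x.shape)

def fold_channel_noise(y, alpha_bar, snr_db):
    """Absorb AWGN into an effective diffusion state (illustrative).

    Received y = sqrt(a)*x0 + sqrt(1-a)*eps + n, with Var[n] = sigma2.
    Rescaling y by 1/sqrt(1 + sigma2) yields a valid diffusion state at
    the higher effective noise level a' = a / (1 + sigma2), so the
    receiver can start reverse DDIM sampling from a later timestep.
    This closed form assumes unit-variance x0; it is our sketch of the
    "compensate for increased noise" step, not the paper's derivation.
    """
    sigma2 = 10.0 ** (-snr_db / 10.0)
    alpha_bar_eff = alpha_bar / (1.0 + sigma2)
    y_eff = y / np.sqrt(1.0 + sigma2)
    return y_eff, alpha_bar_eff

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))            # stand-in for an image latent
eps = rng.standard_normal(x0.shape)
alpha_bar = 0.7                              # transmitter-side noise level
xt = ddim_forward(x0, alpha_bar, eps)        # stage 1: diffuse at transmitter
y = awgn_channel(xt, snr_db=10.0, rng=rng)   # channel corrupts x_t in transit
y_eff, a_eff = fold_channel_noise(y, alpha_bar, snr_db=10.0)
# a_eff < alpha_bar: the receiver resumes reverse sampling from a noisier
# (later) timestep, which is the sampling-step adjustment in spirit.
```

The receiver would then map `a_eff` to the nearest timestep of its noise schedule and run ordinary reverse DDIM sampling from there, which is why no channel-specific training is needed.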
Problem

Research questions and friction points this paper is trying to address.

Training-free semantic communication systems using generative diffusion models
Enhancing robustness against channel noise in semantic communication
Optimizing sampling steps to compensate for increased noise levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free semantic communication using generative diffusion models
Two-stage forward diffusion for noise robustness
Optimized sampling steps to counteract channel noise