Privacy-Preserving Semantic Communication over Wiretap Channels with Learnable Differential Privacy

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses image privacy leakage in semantic communication over eavesdropping (wiretap) channels, where conventional differential privacy (DP) methods rely on unrealistic assumptions, such as the legitimate user holding a channel advantage or having prior knowledge of the eavesdropper's model. Method: a learnable DP framework that requires neither a channel advantage nor eavesdropper-specific prior knowledge. It extracts disentangled semantic representations with a GAN-based inversion module, learns task-adaptive DP noise with a neural generator, and optimizes the privacy–utility trade-off through end-to-end adversarial training; the privacy budget can be tuned flexibly to set the security level. Results: compared with standard DP and plaintext transmission, the method significantly degrades the eavesdropper's image reconstruction quality (↑LPIPS, ↓FPPSR) while causing only marginal degradation in the legitimate user's downstream task performance. To the authors' knowledge, this is the first approach to achieve channel-agnostic and model-agnostic robust privacy protection at the semantic level.
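The summary notes that the privacy budget can be tuned to set the security level. For context, here is a minimal sketch of the conventional white-Gaussian DP baseline that the paper's learnable noise replaces, using the standard analytic Gaussian-mechanism calibration; the function names are illustrative, not from the paper.

```python
import math

import numpy as np

def gaussian_mechanism_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    # Classic Gaussian-mechanism calibration:
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon (valid for epsilon <= 1).
    # A smaller epsilon (tighter privacy budget) forces a larger noise scale.
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

def perturb_baseline(latent: np.ndarray, epsilon: float,
                     delta: float = 1e-5, sensitivity: float = 1.0) -> np.ndarray:
    # White Gaussian DP noise added to a semantic latent vector.
    sigma = gaussian_mechanism_sigma(epsilon, delta, sensitivity)
    return latent + np.random.normal(0.0, sigma, size=latent.shape)

# Tightening the budget from epsilon=1.0 to epsilon=0.1 demands ~10x more noise:
assert gaussian_mechanism_sigma(0.1, 1e-5, 1.0) > gaussian_mechanism_sigma(1.0, 1e-5, 1.0)
```

This is exactly the non-invertibility trade-off the paper targets: larger sigma protects against the eavesdropper but also damages the legitimate user's task performance, which motivates shaping the noise instead of whitening it.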

📝 Abstract
While semantic communication (SemCom) improves transmission efficiency by focusing on task-relevant information, it also raises critical privacy concerns. Many existing secure SemCom approaches rely on restrictive or impractical assumptions, such as favorable channel conditions for the legitimate user or prior knowledge of the eavesdropper's model. To address these limitations, this paper proposes a novel secure SemCom framework for image transmission over wiretap channels, leveraging differential privacy (DP) to provide approximate privacy guarantees. Specifically, our approach first extracts disentangled semantic representations from source images using a generative adversarial network (GAN) inversion method, and then selectively perturbs private semantic representations with approximate DP noise. Distinct from conventional DP-based protection methods, we introduce DP noise with a learnable pattern, instead of traditional white Gaussian or Laplace noise, achieved through adversarial training of neural networks (NNs). This design mitigates the inherent non-invertibility of DP while effectively protecting private information. Moreover, it enables explicitly controllable security levels by adjusting the privacy budget according to specific security requirements, which most existing secure SemCom approaches do not achieve. Experimental results demonstrate that, compared with the previous DP-based method and direct transmission, the proposed method significantly degrades the reconstruction quality for the eavesdropper, while introducing only slight degradation in task performance. Under comparable security levels, our approach achieves an LPIPS advantage of 0.06-0.29 and an FPPSR advantage of 0.10-0.86 for the legitimate user compared with the previous DP-based method.
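The abstract's central idea, replacing white Gaussian noise with a learnable noise pattern at the DP-calibrated scale, can be caricatured as follows. This is a hypothetical sketch: `W` stands in for the paper's adversarially trained noise-generating NN, and matching the empirical standard deviation is only a schematic proxy for the paper's approximate-DP analysis, not a privacy proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def learnable_noise_perturb(latent: np.ndarray, W: np.ndarray, sigma: float) -> np.ndarray:
    # Pass a white Gaussian seed through a learnable linear map W (stand-in
    # for the paper's noise generator), then rescale the shaped noise so its
    # empirical scale matches the DP-calibrated level sigma.
    seed = rng.standard_normal(latent.shape)      # white noise seed
    shaped = seed @ W                             # learned, structured pattern
    shaped *= sigma / (shaped.std() + 1e-12)      # enforce target noise scale
    return latent + shaped

d = 16
W = rng.standard_normal((d, d)) / np.sqrt(d)      # would be trained adversarially
z = rng.standard_normal((1, d))                   # private semantic latent
sigma = 2.0                                       # set by the chosen privacy budget
z_priv = learnable_noise_perturb(z, W, sigma)
assert z_priv.shape == z.shape
```

In the paper's framework, `W` would be optimized end-to-end so the shaped noise maximally confuses the eavesdropper's reconstruction while remaining recoverable enough for the legitimate user's downstream task.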
Problem

Research questions and friction points this paper is trying to address.

Protecting private semantic information in communication systems
Overcoming impractical security assumptions in semantic transmission
Providing controllable privacy guarantees with learnable DP noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses GAN inversion for semantic representation extraction
Applies learnable DP noise via adversarial training
Enables controllable security with adjustable privacy budget