🤖 AI Summary
In multi-hop AWGN channels, DeepJSCC suffers from semantic misalignment and degraded perceptual reconstruction quality due to cumulative noise. To address this, we propose a semantic-clustering-constrained multi-hop DeepJSCC framework. Our core innovation is a pre-trained deep hash distillation (DHD) module that enforces semantic alignment across hops by using hash-code consistency as a semantic anchor. Training jointly minimizes an MSE reconstruction loss and the cosine distance between the DHD hashes of the source and reconstructed images, with LPIPS serving as the primary perceptual quality metric. The proposed method effectively mitigates noise accumulation, improving both semantic fidelity and perceptual quality of reconstructed images across diverse multi-hop configurations, achieving a 12.7%–23.4% LPIPS reduction. It also enhances content retrievability and transmission security through semantic clustering and hash-based constraints.
📝 Abstract
We consider image transmission via deep joint source-channel coding (DeepJSCC) over multi-hop additive white Gaussian noise (AWGN) channels. A DeepJSCC encoder-decoder pair is trained jointly with a pre-trained deep hash distillation (DHD) module that semantically clusters images, improving perceptual reconstruction quality and facilitating security-oriented applications through enhanced semantic consistency. We train the DeepJSCC module to both reduce the mean squared error (MSE) and minimize the cosine distance between the DHD hashes of the source and reconstructed images. Measured by the learned perceptual image patch similarity (LPIPS) metric, the resulting semantic alignment yields significantly improved perceptual quality across different multi-hop settings, in which classical DeepJSCC may suffer from noise accumulation.
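The abstract describes a training objective that combines pixel-level MSE with the cosine distance between DHD hash codes of the source and reconstructed images. A minimal NumPy sketch of such a combined objective is given below; the function names and the weighting factor `lam` are illustrative assumptions, not taken from the paper, and in practice the hashes would come from the pre-trained DHD network.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between flattened hash vectors
    a, b = np.ravel(a), np.ravel(b)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def joint_loss(source_img, recon_img, source_hash, recon_hash, lam=0.1):
    # Reconstruction term: pixel-wise mean squared error
    mse = np.mean((source_img - recon_img) ** 2)
    # Semantic term: cosine distance between DHD hash codes
    # (lam is a hypothetical trade-off weight, not specified in the abstract)
    sem = cosine_distance(source_hash, recon_hash)
    return mse + lam * sem
```

With a perfect reconstruction and identical hashes, both terms vanish and the loss is zero; noisier multi-hop reconstructions are penalized through both the pixel error and the semantic (hash) mismatch.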