🤖 AI Summary
Traditional joint source-channel coding (JSCC) suffers from heavy reliance on large-scale training data, modality-specific designs, high decoding complexity, and poor generalization. To address these limitations, this paper proposes Implicit-JSCC, a paradigm that operates on each image instance individually: an overfitting-driven implicit neural representation directly optimizes the channel symbols together with an ultra-lightweight decoder (only 607 parameters and 641 multiplications per pixel), with no training dataset or pre-trained model. The method enables storage-free, modality-agnostic, one-time offline encoding with repeated online decoding, and the instance-specific design sidesteps the source-generalization problem by construction. On high-SNR image transmission tasks, it achieves state-of-the-art (SOTA) performance while cutting decoding complexity by roughly three orders of magnitude, making it well suited to low-latency applications such as streaming media. A minimal sketch of the overfitting loop follows.
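To make the paradigm concrete, here is a minimal PyTorch sketch of the per-instance overfitting loop, assuming an AWGN channel and an illustrative latent-grid shape and decoder. The paper's exact architecture, symbol layout, and training schedule are not given in the summary, so every name and hyperparameter below is a hypothetical stand-in, not the authors' design.

```python
import torch
import torch.nn as nn

# Hypothetical shapes and hyperparameters (not from the paper).
H, W = 256, 256                       # source image resolution
image = torch.rand(1, 3, H, W)        # the single image being overfitted
snr_db = 20.0                         # channel SNR in dB (high-SNR regime)

# Trainable channel symbols: a small latent grid transmitted over the channel.
symbols = nn.Parameter(torch.randn(1, 8, H // 16, W // 16))

# Ultra-lightweight decoder; illustrative only (~800 parameters here,
# in the spirit of the paper's 607-parameter decoder).
decoder = nn.Sequential(
    nn.Conv2d(8, 8, 3, padding=1),
    nn.GELU(),
    nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
    nn.Conv2d(8, 3, 3, padding=1),
)

def awgn(x, snr_db):
    """Power-normalize the symbols and add white Gaussian channel noise."""
    x = x / x.pow(2).mean().sqrt()        # unit average symbol power
    noise_std = 10 ** (-snr_db / 20)      # noise std for the given SNR
    return x + noise_std * torch.randn_like(x)

# Offline encoding = jointly overfitting symbols and decoder to this image.
opt = torch.optim.Adam([symbols, *decoder.parameters()], lr=1e-3)
for step in range(10_000):
    opt.zero_grad()
    recon = decoder(awgn(symbols, snr_db))   # decode noisy received symbols
    loss = nn.functional.mse_loss(recon, image)
    loss.backward()
    opt.step()
```

Training through the simulated channel is what makes the optimized symbols robust to noise: the decoder only ever sees noisy symbols, so reconstruction quality at the target SNR is exactly what the loss measures.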
📝 Abstract
This paper introduces Implicit-JSCC, a novel overfitted joint source-channel coding paradigm that directly optimizes channel symbols and a lightweight neural decoder for each source. This instance-specific strategy eliminates the need for training datasets or pre-trained models, enabling a storage-free, modality-agnostic solution. As a low-complexity alternative, Implicit-JSCC achieves efficient image transmission with around 1000x lower decoding complexity, using as few as 607 model parameters and 641 multiplications per pixel. This overfitted design inherently addresses source generalizability and achieves state-of-the-art results in high-SNR regimes, underscoring its promise for future communication systems, especially streaming scenarios where one-time offline encoding supports repeated online decoding.
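Continuing the hypothetical sketch above, the streaming claim reduces to this split: the expensive overfitting loop runs once offline at the sender, while each online decoding at the receiver is a single forward pass through the tiny decoder (roughly a few hundred multiplications per pixel for the sketch's layers, in the spirit of the paper's 641).

```python
# Receiver side: after one-time offline encoding, every transmission
# needs only one cheap forward pass through the overfitted decoder.
with torch.no_grad():
    received = awgn(symbols.detach(), snr_db)   # symbols after the channel
    image_hat = decoder(received)               # reconstructed image

n_params = sum(p.numel() for p in decoder.parameters())
print(n_params)  # ~800 for this sketch; the paper reports 607 for its design
```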