🤖 AI Summary
Traditional AI-based channel encoders suffer from poor generalization, limited codeword testability, and an inability to adapt across varying SNR regimes. To address these limitations, this paper proposes a multi-level deep autoencoder architecture built on convolutional neural networks. The design employs block-wise bit encoding and end-to-end joint optimization, enabling exhaustive layer-by-layer enumeration of all codewords and thereby significantly improving reliability and generalization over the full codebook. In addition, a dynamically prunable hierarchical structure allows a single trained model to operate adaptively across a wide SNR range without retraining. Experimental results show that the proposed scheme matches or outperforms Polar codes and TurboAE-MOD across multiple SNR values, while exhibiting superior robustness and practical viability under full-codebook evaluation.
📝 Abstract
In this paper, we design a deep learning-based convolutional autoencoder for channel coding and modulation. The objective is to develop an adaptive scheme capable of operating at various signal-to-noise ratios (SNRs) without the need for re-training. Additionally, the proposed framework allows validation by testing all possible codes in the codebook, as opposed to previous AI-based encoder/decoder frameworks, which relied on testing only a small subset of the available codes; this limitation often led to unreliable conclusions when generalized to larger codebooks. In contrast to previous methods, our multi-level encoding and decoding approach splits the message into blocks, where each encoder block processes a distinct group of $B$ bits. By doing so, the proposed scheme can exhaustively test the $2^{B}$ possible codewords at each encoder/decoder level, each level constituting a layer of the overall scheme. The proposed model was compared to classical polar codes and the TurboAE-MOD scheme, showing improved reliability while achieving comparable, or even superior, results in some settings. Notably, the architecture can adapt to different SNRs by selectively removing one of the encoder/decoder layers without re-training, demonstrating flexibility and efficiency in practical wireless communication scenarios.
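The full-codebook validation idea above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's convolutional architecture): a stand-in encoder maps each $B$-bit block to a codeword, and because $B$ is small per level, all $2^{B}$ codewords can be enumerated and verified exhaustively rather than sampled. The toy BPSK-plus-parity mapping and the value $B = 4$ are assumptions for illustration only.

```python
import itertools
import numpy as np

B = 4  # bits per encoder block (assumed small for illustration)

def toy_level_encoder(bits: np.ndarray) -> np.ndarray:
    """Stand-in for one trained encoder level: BPSK-map the bits
    (0/1 -> +1/-1) and append a single parity symbol. The paper's
    actual levels are learned convolutional networks."""
    symbols = 1.0 - 2.0 * bits
    parity = np.prod(symbols, keepdims=True)
    return np.concatenate([symbols, parity])

def toy_level_decoder(codeword: np.ndarray) -> np.ndarray:
    """Invert the toy encoder via hard decisions on the BPSK symbols."""
    return (codeword[:B] < 0).astype(int)

# Exhaustively enumerate all 2**B messages for this level and verify
# that encode -> decode recovers every one (noiseless sanity check).
all_ok = all(
    np.array_equal(
        np.array(msg),
        toy_level_decoder(toy_level_encoder(np.array(msg))),
    )
    for msg in itertools.product([0, 1], repeat=B)
)
print(f"all {2**B} codewords verified: {all_ok}")
```

Because each level only ever sees $2^{B}$ distinct inputs, this check covers the level's entire codebook, which is exactly what makes the reliability claims testable rather than extrapolated from a sampled subset.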