Deep Variable-Length Feedback Codes

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes DeepVLF, a novel deep learning framework for variable-length feedback channel coding that overcomes limitations of existing methods—such as fixed blocklengths, performance degradation at high code rates, and lack of adaptivity. DeepVLF introduces two architectures: receiver-driven (DeepVLF-R) and transmitter-driven (DeepVLF-T), integrating bit grouping, Transformer-based encoders and decoders, and a dynamic termination mechanism. Operating over both AWGN and 5G-NR fading channels, the framework autonomously learns a two-stage strategy resembling the Schalkwijk–Kailath scheme, offering both interpretability and alignment with information-theoretic principles. Experimental results demonstrate that DeepVLF reduces channel uses by 20%–55% at the same block error rate compared to state-of-the-art approaches, while effectively mitigating the error floor in high-rate regimes.
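The paper's termination mechanism is learned end-to-end; as a rough illustration of what receiver-driven variable-length stopping means, here is a minimal hand-crafted sketch (not the paper's method): one bit is repeated over an AWGN channel until the receiver's accumulated log-likelihood ratio clears a confidence threshold, at which point one feedback bit would halt the transmitter. The function name, threshold, and parameters are all illustrative assumptions.

```python
import numpy as np

def vlf_repetition(bit, snr, llr_threshold, max_uses, rng):
    """Illustrative variable-length transmission of one bit over AWGN:
    repeat the BPSK symbol until the receiver's accumulated LLR clears
    a confidence threshold (a stand-in for learned termination)."""
    x = np.sqrt(snr) * (1.0 - 2.0 * bit)   # BPSK: 0 -> +sqrt(snr), 1 -> -sqrt(snr)
    llr, uses = 0.0, 0
    while abs(llr) < llr_threshold and uses < max_uses:
        y = x + rng.standard_normal()      # unit-variance AWGN
        llr += 2.0 * np.sqrt(snr) * y      # LLR increment for BPSK over AWGN
        uses += 1
    return int(llr < 0), uses              # decoded bit, channel uses consumed
```

At high SNR the loop stops after one or two uses; at low SNR it keeps spending channel uses until confident, which is exactly the rate adaptation that fixed-blocklength codes cannot express.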

📝 Abstract
Deep learning has enabled significant advances in feedback-based channel coding, yet existing learned schemes remain fundamentally limited: they employ fixed block lengths, suffer degraded performance at high rates, and cannot fully exploit the adaptive potential of feedback. This paper introduces Deep Variable-Length Feedback (DeepVLF) coding, a flexible coding framework that dynamically adjusts transmission length via learned feedback. We propose two complementary architectures: DeepVLF-R, where termination is receiver-driven, and DeepVLF-T, where the transmitter controls termination. Both architectures leverage bit-group partitioning and transformer-based encoder-decoder networks to enable fine-grained rate adaptation in response to feedback. Evaluations over AWGN and 5G-NR fading channels demonstrate that DeepVLF substantially outperforms state-of-the-art learned feedback codes. It achieves the same block error rate with 20%-55% fewer channel uses and lowers error floors by orders of magnitude, particularly in high-rate regimes. Encoding dynamics analysis further reveals that the models autonomously learn a two-phase strategy analogous to classical Schalkwijk-Kailath coding: an initial information-carrying phase followed by a noise-cancellation refinement phase. This emergent behavior underscores the interpretability and information-theoretic alignment of the learned codes.
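The classical Schalkwijk-Kailath scheme that the learned codes are said to rediscover has a simple closed form: after an initial information-carrying transmission, the transmitter uses noiseless feedback to learn the receiver's estimation error and repeatedly sends a power-scaled version of it, driving the error variance down geometrically. A minimal simulation of that refinement phase (unit noise variance, Gaussian message; all names are illustrative):

```python
import numpy as np

def sk_refine(theta, snr, n_rounds, rng):
    """Schalkwijk-Kailath refinement over AWGN with noiseless feedback:
    each round, the transmitter sends the receiver's current estimation
    error scaled to power `snr`; the receiver applies an MMSE update."""
    est, err_var = 0.0, 1.0                    # receiver starts from the prior N(0, 1)
    for _ in range(n_rounds):
        err = theta - est                      # known to transmitter via feedback
        x = np.sqrt(snr / err_var) * err       # scale error to transmit power `snr`
        y = x + rng.standard_normal()          # unit-variance AWGN
        est += np.sqrt(snr * err_var) / (snr + 1.0) * y  # MMSE estimate of err
        err_var /= snr + 1.0                   # geometric error-variance decay
    return est, err_var
```

The error variance shrinks by a factor of 1 + SNR per channel use, which is the doubly exponential error-probability decay the SK scheme is known for, and the initial-transmission-then-refinement structure mirrors the two-phase behavior the paper reports its models learning.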
Problem

Research questions and friction points this paper is trying to address.

feedback coding
variable-length coding
deep learning
channel coding
rate adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeepVLF
variable-length feedback coding
transformer-based encoder-decoder
rate adaptation
learned channel coding
Yu Ding
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong S.A.R.
Yulin Shao
University of Hong Kong
Coding and Modulation
Machine Learning
Stochastic Control