🤖 AI Summary
To address the efficiency bottleneck of relay communications under finite-capacity backhaul links, this paper proposes a task-driven neural compression-forward (CF) scheme. Methodologically, we introduce the first interpretable, end-to-end trainable neural CF architecture, integrating Wyner–Ziv distributed compression with jointly optimized compressor–demodulator training—requiring no prior knowledge of source statistics and enabling automatic learning of near-optimal index binning. The architecture is further tailored to finite-order modulation schemes. Our key contributions are: (i) the first differentiable and interpretable neural CF framework; and (ii) substantial performance gains over conventional non-learning CF approaches, achieving rates close to the Gaussian codebook capacity on classic relay channels—thereby empirically validating the efficacy and superiority of leveraging destination-side side information to enhance relay compression efficiency.
📝 Abstract
The relay channel, consisting of a source-destination pair along with a relay, is a fundamental component of cooperative communications. While the capacity of a general relay channel remains unknown, various relaying strategies, including compress-and-forward (CF), have been proposed. In CF, the relay forwards a quantized version of its received signal to the destination. Given the correlated signals at the relay and destination, distributed compression techniques, such as Wyner--Ziv coding, can be harnessed to utilize the relay-to-destination link more efficiently. Leveraging recent advances in neural network-based distributed compression, we revisit the relay channel problem and integrate a learned task-aware Wyner--Ziv compressor into a primitive relay channel with a finite-capacity out-of-band relay-to-destination link. The resulting neural CF scheme demonstrates that our compressor recovers binning of the quantized indices at the relay, mimicking the optimal asymptotic CF strategy, although no structure exploiting the knowledge of source statistics was imposed into the design. The proposed neural CF, employing finite order modulation, operates closely to the rate achievable in a primitive relay channel with a Gaussian codebook. We showcase the advantages of exploiting the correlated destination signal for relay compression through various neural CF architectures that involve end-to-end training of the compressor and the demodulator components. Our learned task-oriented compressors provide the first proof-of-concept work toward interpretable and practical neural CF relaying schemes.