🤖 AI Summary
This work addresses the distributed neural compress-and-forward (CF) problem over diamond relay channels, targeting fully distributed collaborative compression, without inter-relay coordination, that exploits the correlation between the relays' observations of the source. We propose an end-to-end learned framework for distributed quantization and joint decoding, incorporating differentiable quantizers and finite-order modulation constraints to avoid conventional codebook design and explicit coordination overhead. The method approaches the Berger-Tung bound without any inter-relay communication and performs close to known capacity bounds in the two-relay setting. Key contributions include: (i) the first extension of neural CF to multi-relay topologies; (ii) the first distributed learning-based compression scheme that requires no inter-relay signaling; and (iii) empirical validation of the scalability and near-optimality of neural compression in complex relay networks.
📝 Abstract
The diamond relay channel, where a source communicates with a destination via two parallel relays, is one of the canonical models for cooperative communications. We focus on the primitive variant, where each relay observes a noisy version of the source signal and forwards a compressed description over an orthogonal, noiseless, finite-rate link to the destination. Compress-and-forward (CF) is particularly effective in this setting, especially under oblivious relaying, where relays lack access to the source codebook. While neural CF methods have been studied in single-relay channels, extending them to the two-relay case is non-trivial, as it requires fully distributed compression without any inter-relay coordination. We demonstrate that learning-based quantizers at the relays can harness input correlations by operating remotely yet in a collaborative fashion, enabling effective distributed compression in line with Berger-Tung-style coding. Each relay separately compresses its observation using a one-shot learned quantizer, and the destination jointly decodes the source message. Simulation results show that the proposed scheme, trained end-to-end with finite-order modulation, operates close to the known theoretical bounds. These results demonstrate that neural CF can scale to multi-relay systems while maintaining both performance and interpretability.
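To make the setup concrete, the sketch below simulates the pipeline the abstract describes: two relays independently quantize noisy observations of a common source, and the destination decodes jointly from both compressed descriptions. This is not the authors' implementation; the BPSK alphabet, nearest-neighbor quantizers, fixed codebooks, noise levels, and averaging-based decoder are all hypothetical placeholders for the learned, differentiable components in the actual scheme.

```python
import numpy as np

def quantize(y, codebook):
    """Map each observation to the index of its nearest codeword.
    Stand-in for a learned one-shot quantizer; during training the
    argmin would be relaxed (e.g. via a straight-through estimator)
    to make the quantization step differentiable."""
    return np.argmin(np.abs(y[:, None] - codebook[None, :]), axis=1)

def joint_decode(idx1, idx2, cb1, cb2, symbols):
    """Destination combines both compressed descriptions: average the
    two reconstructed codewords, then pick the nearest source symbol.
    A hypothetical stand-in for the learned joint decoder."""
    y_hat = 0.5 * (cb1[idx1] + cb2[idx2])
    return symbols[np.argmin(np.abs(y_hat[:, None] - symbols[None, :]), axis=1)]

rng = np.random.default_rng(0)
symbols = np.array([-1.0, 1.0])            # BPSK source alphabet (illustrative)
x = rng.choice(symbols, size=1000)         # source transmissions
y1 = x + 0.3 * rng.standard_normal(1000)   # relay 1 observation (noisy)
y2 = x + 0.3 * rng.standard_normal(1000)   # relay 2 observation (noisy)

# 2-bit codebook per relay (fixed here; learned end-to-end in the scheme)
cb = np.array([-1.5, -0.5, 0.5, 1.5])
x_hat = joint_decode(quantize(y1, cb), quantize(y2, cb), cb, cb, symbols)
print("symbol error rate:", np.mean(x_hat != x))
```

Note that neither relay sees the other's observation or index; the correlation between `y1` and `y2` is exploited only at the destination, which is the Berger-Tung-style division of labor the paper builds on.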