🤖 AI Summary
This study systematically investigates the robustness of AI-based channel decoders under minute channel perturbations, revealing a pronounced vulnerability to performance degradation under distributional shifts. By constructing both input-specific and universal adversarial perturbations, generated via the Fast Gradient Method (FGM) and projected gradient descent under ℓ₂-norm constraints, the work evaluates the out-of-distribution stability of state-of-the-art AI decoders such as ECCT and CrossMPT beyond the i.i.d. AWGN setting. Experimental results demonstrate that even imperceptibly small adversarial perturbations can cause significant performance drops, with universal perturbations proving substantially more destructive than random perturbations of equal norm. These findings point to a fundamental robustness deficiency in AI decoders relative to conventional belief propagation (BP) decoders, exposing a stability cost underlying their apparent performance gains.
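To make the input-specific attack concrete, below is a minimal PyTorch sketch of an ℓ₂-constrained FGM step of the kind the study describes. It assumes a differentiable `decoder` mapping received vectors to per-bit logits and uses binary cross-entropy against the transmitted bits as the decoding loss; the function name, the loss choice, and the interface are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def fgm_l2(decoder, y, bits, eps):
    """Input-specific FGM under an l2 budget (illustrative sketch).

    Moves each received vector y one step in the gradient direction of
    the decoding loss, rescaled so every perturbation has l2 norm eps.
    y: (batch, n) received vectors; bits: (batch, n) transmitted bits in {0, 1}.
    """
    y = y.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(decoder(y), bits)
    grad, = torch.autograd.grad(loss, y)
    # Normalize per example so each perturbation sits on the eps l2-sphere.
    norms = grad.flatten(1).norm(p=2, dim=1).view(-1, 1) + 1e-12
    delta = eps * grad / norms
    return (y + delta).detach()
```

A multi-step projected-gradient variant repeats this update and projects the accumulated perturbation back onto the ℓ₂ ball of radius eps after each step.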
📝 Abstract
Recent advances in deep learning have led to AI-based error correction decoders that report empirical performance improvements over traditional belief-propagation (BP) decoding on AWGN channels. While such gains are promising, a fundamental question remains: where do these improvements come from, and what cost is paid to achieve them? In this work, we study this question through the lens of robustness to distributional shifts at the channel output. We evaluate both input-dependent adversarial perturbations (FGM and projected gradient methods under $\ell_2$ constraints) and universal adversarial perturbations that apply a single norm-bounded shift to all received vectors. Our results show that recent AI decoders, including ECCT and CrossMPT, can suffer significant performance degradation under such perturbations, despite superior nominal performance under i.i.d. AWGN. Moreover, adversarial perturbations transfer relatively strongly between AI decoders but weakly to BP-based decoders, and universal perturbations are substantially more harmful than random perturbations of equal norm. These numerical findings suggest that recent AI decoding gains may carry a robustness cost: a heightened sensitivity to the channel output distribution.
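The universal attack described in the abstract can be sketched as projected gradient ascent over a single shared perturbation. The sketch below is a hedged illustration, not the paper's code: it assumes the same `decoder` interface and BCE loss as above, a `loader` yielding batches of (received vector, transmitted bits) pairs, and hypothetical hyperparameter names `steps` and `lr`.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(decoder, loader, dim, eps, steps=10, lr=1e-2):
    """Search for one l2-bounded shift delta applied to every received
    vector, via projected gradient ascent on the average decoding loss
    (illustrative sketch). dim: block length n of the received vectors."""
    delta = torch.zeros(dim, requires_grad=True)
    for _ in range(steps):
        for y, bits in loader:  # y: (batch, n), bits: (batch, n) in {0, 1}
            loss = F.binary_cross_entropy_with_logits(decoder(y + delta), bits)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += lr * grad          # ascend: increase decoding loss
                norm = delta.norm(p=2)
                if norm > eps:              # project onto the eps l2-ball
                    delta *= eps / norm
    return delta.detach()
```

Comparing the decoder's error rate under this single optimized shift against a random shift of the same ℓ₂ norm reproduces the kind of universal-versus-random comparison the abstract reports.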