🤖 AI Summary
This work addresses the challenge of deploying general-purpose error-correcting code (ECC) Transformer decoders, which are hindered by high computational complexity and large parameter counts. To overcome this, the authors propose the Spectral-Aligned Pruning (SAP) framework, which, for the first time, incorporates the spectral properties of ECC bipartite graphs into structured pruning. SAP generates pruning masks that are shareable across different code types and integrates low-rank adaptation (LoRA) for lightweight, code-specific performance recovery. The method achieves decoding performance comparable to code-specific pruning strategies across multiple ECC families while substantially reducing both computational overhead and memory footprint, effectively balancing efficiency with generalization.
📝 Abstract
Recently, the Foundation Error Correction Code Transformer (FECCT) has emerged as a promising universal channel decoder, achieving competitive decoding performance across diverse code families with a single shared model backbone, optionally followed by code-specific retraining. Despite this flexibility, the high computational complexity and large parameter footprint of Transformer-based decoders present substantial obstacles to practical deployment. To address these challenges, we investigate structured pruning for FECCT and propose Spectral-Aligned Pruning (SAP), a structure-aware framework that enables structured pruning masks to be reused across codes by leveraging the spectrum of each code's bipartite graph. After pruning, SAP performs per-code recovery via parameter-efficient low-rank adaptation (LoRA), enabling a shared pruned backbone while storing only small code-specific adapter parameters. Experiments across diverse codes show that SAP achieves decoding performance comparable to dedicated per-code pruning while enabling substantial reductions in computational cost and model memory footprint through kernel-level structured pruning.
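To make the pipeline concrete, here is a minimal illustrative sketch, not the authors' implementation: it builds the bipartite (Tanner) graph of a toy parity-check code, computes the normalized-Laplacian spectrum that a spectral-alignment score could draw on, derives a shareable structured (head-level) pruning mask, and applies a LoRA-style low-rank update for per-code recovery. All variable names, the toy code, and the random stand-in scores are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parity-check matrix H: rows are check nodes, columns are variable nodes.
H = rng.integers(0, 2, size=(4, 8))

# Adjacency of the bipartite graph and its normalized Laplacian spectrum.
n_chk, n_var = H.shape
A = np.block([[np.zeros((n_chk, n_chk)), H],
              [H.T, np.zeros((n_var, n_var))]])
deg = A.sum(axis=1)
d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
eigvals = np.linalg.eigvalsh(L)  # graph spectrum (lies in [0, 2])

# Score attention heads by some spectral-alignment criterion (random
# stand-in here) and keep the top half: a structured pruning mask that
# could be shared across codes with similar spectra.
n_heads = 8
scores = rng.random(n_heads)           # hypothetical alignment scores
keep = scores >= np.median(scores)     # boolean head-level mask

# LoRA-style recovery: frozen pruned weight W plus a trainable low-rank
# update B @ A_lr; B starts at zero, so adaptation begins exactly at W.
d, r = 16, 2
W = rng.standard_normal((d, d))
B = np.zeros((d, r))
A_lr = rng.standard_normal((r, d))
W_adapted = W + B @ A_lr               # equals W before adapter training
```

In this sketch only the small `B` and `A_lr` matrices would be stored per code, while `W` and the mask `keep` are shared, mirroring the memory-footprint argument in the abstract.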