🤖 AI Summary
This work addresses the significant performance degradation that neural networks suffer in edge inference due to the mismatch between their noise-free training environment and the noisy wireless channels encountered at deployment. For the first time, PAC-Bayesian theory is introduced to analyze the impact of wireless channels, yielding an analytically tractable generalization error bound. Leveraging this bound, channel statistics are explicitly embedded into the neural network's weight space, enabling the construction of a channel-aware enhanced model. The proposed approach requires no end-to-end retraining and substantially improves inference accuracy, effectively mitigating the performance loss caused by channel noise. This paradigm offers a new pathway toward efficient and robust edge intelligence.
📝 Abstract
In the emerging paradigm of edge inference, neural networks (NNs) are partitioned across distributed edge devices that collaboratively perform inference via wireless transmission. However, standard NNs are generally trained in a noiseless environment, creating a mismatch with the noisy channels encountered during edge deployment. In this paper, we address this issue by characterizing the channel-induced performance deterioration as a generalization error against unseen channels. We introduce an augmented NN model that incorporates channel statistics directly into the weight space, allowing us to derive PAC-Bayesian generalization bounds that explicitly quantify the impact of wireless distortion. We further provide closed-form expressions for practical channels to demonstrate the tractability of these bounds. Inspired by the theoretical results, we propose a channel-aware training algorithm that minimizes a surrogate objective based on the derived bound. Simulations show that the proposed algorithm can effectively improve inference accuracy by leveraging channel statistics, without end-to-end retraining.
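As a toy illustration of the problem setting (not the paper's actual algorithm or bound), the core mismatch can be sketched with an AWGN model: weights trained noise-free are perturbed by channel noise at inference time, degrading accuracy, while exploiting the known channel statistics, here by marginalizing predictions over channel draws, recovers much of the loss. All quantities below (the linear classifier, noise level `sigma`, number of draws `K`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a single linear classifier whose weights are "transmitted"
# over an AWGN channel (hypothetical illustration only).
d, n_samples = 16, 2000
w = rng.normal(size=d)                       # weights trained in a noiseless environment
X = rng.normal(size=(n_samples, d))
y = (X @ w > 0).astype(int)                  # labels defined by the clean model

sigma = 0.5                                  # channel noise std (assumed known statistic)

def accuracy(weights):
    return np.mean((X @ weights > 0) == y)

# Naive deployment: each inference sees one noisy channel realization of the weights.
acc_noisy = np.mean(
    [accuracy(w + sigma * rng.normal(size=d)) for _ in range(200)]
)

# Channel-aware deployment: average logits over K independent channel draws,
# i.e., use the channel statistics to marginalize out the weight perturbation.
K = 32
noisy_ws = w + sigma * rng.normal(size=(K, d))
acc_aware = np.mean(((X @ noisy_ws.mean(axis=0)) > 0) == y)

print(f"clean: {accuracy(w):.3f}, noisy: {acc_noisy:.3f}, channel-aware: {acc_aware:.3f}")
```

Averaging over K draws shrinks the effective perturbation variance by a factor of K, so the channel-aware accuracy sits well above the single-realization accuracy; the paper's approach achieves robustness analytically via a PAC-Bayesian bound rather than by Monte Carlo averaging.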