🤖 AI Summary
This work addresses online learning optimization for communication systems operating over non-independent and identically distributed (non-IID), time-correlated channels. Specifically, it focuses on decoder adaptation under time-varying fading channels and dynamic codebook selection in time-varying additive noise channels. We establish the first theoretical framework for online learning in communication over time-dependent channels and propose a joint optimization algorithm based on Optimistic Online Mirror Descent (Optimistic OMD). We rigorously prove a sublinear regret bound on the expected symbol error rate. Simulation results demonstrate that the proposed method significantly reduces the average symbol error rate compared to existing baselines, with strong agreement between theoretical bounds and empirical performance. The core contributions are: (i) the first online learning theory for communication over non-IID temporal channels; and (ii) a unified treatment of algorithm design, convergence analysis, and experimental validation.
📝 Abstract
Machine learning techniques have garnered great interest in the design of communication systems owing to their capability to handle channel uncertainty. To provide theoretical guarantees for learning-based communication systems, some recent works analyze generalization bounds for the devised methods under the assumption of Independently and Identically Distributed (I.I.D.) channels, a condition rarely met in practical scenarios. In this paper, we drop the I.I.D. channel assumption and study an online optimization problem of learning to communicate over time-correlated channels. Concretely, we focus on two tasks: optimizing channel decoders for time-correlated fading channels and selecting optimal codebooks for time-correlated additive noise channels. To exploit the temporal dependence of the considered channels when learning communication systems, we develop two online optimization algorithms based on the optimistic online mirror descent framework. Furthermore, we provide theoretical guarantees for the proposed algorithms by deriving sublinear regret bounds on the expected error probability of the learned systems. Extensive simulation experiments validate that our approaches leverage the channel correlation to achieve a lower average symbol error rate than baseline methods, consistent with our theoretical findings.
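To make the optimistic online mirror descent idea concrete, below is a minimal sketch of a single optimistic update over the probability simplex (e.g., a distribution over candidate codebooks) with the entropic mirror map. This is an illustration of the generic optimistic-OMD template, not the paper's exact algorithm: the variable names, the step size `eta`, and the choice of using the last observed gradient as the "hint" (natural when the channel is time-correlated, so the previous loss predicts the next) are assumptions for exposition.

```python
import numpy as np

def optimistic_omd_step(y, grad, hint, eta):
    """One optimistic OMD step with the entropic (KL) mirror map.

    y    : current secondary iterate on the simplex
    grad : observed loss gradient at the last played point
    hint : prediction of the next gradient (for time-correlated
           channels, the previous gradient is a natural hint)
    eta  : step size (assumed fixed here for simplicity)
    """
    # Secondary update: fold the observed gradient into y_t.
    y = y * np.exp(-eta * grad)
    y = y / y.sum()
    # Primary update: play optimistically against the hinted gradient.
    x = y * np.exp(-eta * hint)
    x = x / x.sum()
    return x, y

# Toy usage: three candidate codebooks; codebook 0 incurred high loss,
# and the correlated channel suggests it will again.
y = np.ones(3) / 3.0
grad = np.array([1.0, 0.0, 0.0])
x, y = optimistic_omd_step(y, grad, hint=grad, eta=0.5)
```

When the hint is accurate (as correlation makes likely), the played point `x` shifts probability away from high-loss choices before the loss is even observed, which is the mechanism behind the improved regret bounds for predictable gradient sequences.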