🤖 AI Summary
To address privacy preservation, excessive communication overhead, and insufficient real-time performance in multi-base-station collaborative modeling for autonomous mobile networks, this paper proposes a lightweight federated learning framework tailored for real-time feature prediction—including ping latency, signal-to-noise ratio (SNR), and frequency band—in V2X scenarios. Innovatively, it integrates the ISO/IEC Neural Network Coding (NNC) standard with tiny language models (TLMs) into the federated learning pipeline to achieve highly efficient, near-lossless model parameter compression. The framework employs NNCodec, Fraunhofer's efficient implementation of the NNC standard. Evaluated on the Berlin V2X dataset, the framework achieves negligible model accuracy degradation (i.e., transparent compression), reduces communication load to under 1% of the original parameter transmission volume, and significantly enhances bandwidth efficiency and training reliability.
📝 Abstract
In telecommunications, Autonomous Networks (ANs) automatically adjust configurations based on specific requirements (e.g., bandwidth) and available resources. These networks rely on continuous monitoring and intelligent mechanisms for self-optimization, self-repair, and self-protection, nowadays enhanced by Neural Networks (NNs) to enable predictive modeling and pattern recognition. Here, Federated Learning (FL) allows multiple AN cells - each equipped with NNs - to collaboratively train models while preserving data privacy. However, FL requires frequent transmission of large volumes of neural network data and thus an efficient, standardized compression strategy for reliable communication. To address this, we investigate NNCodec, a Fraunhofer implementation of the ISO/IEC Neural Network Coding (NNC) standard, within a novel FL framework that integrates tiny language models (TLMs) for the prediction of various mobile network features (e.g., ping, SNR, or frequency band). Our experimental results on the Berlin V2X dataset demonstrate that NNCodec achieves transparent compression (i.e., negligible performance loss) while reducing communication overhead to below 1%, showing the effectiveness of combining NNC with FL in collaboratively learned autonomous mobile networks.
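The FL round described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each "cell" sends a quantized version of its locally trained weights to the server, which decodes and averages them. Uniform quantization here is only a simplified stand-in for the NNC standard's quantization-plus-DeepCABAC entropy-coding pipeline, and all function and variable names are hypothetical.

```python
import random

STEP = 0.02  # quantization step size (illustrative choice)

def quantize(params, step=STEP):
    # Map each weight to an integer index; a crude stand-in for the
    # NNC encoder (which additionally entropy-codes these indices).
    return [round(p / step) for p in params]

def dequantize(qparams, step=STEP):
    # Server-side decoding back to floating-point weights.
    return [q * step for q in qparams]

def fedavg(updates):
    # Federated averaging: element-wise mean of the decoded client updates.
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

random.seed(0)
global_model = [random.uniform(-1, 1) for _ in range(8)]

# Each AN cell trains locally (simulated as small perturbations),
# then transmits only the quantized parameters.
client_updates = []
for _ in range(3):
    local = [w + random.gauss(0, 0.05) for w in global_model]
    client_updates.append(dequantize(quantize(local)))

new_global = fedavg(client_updates)
max_err = max(abs(a - b) for a, b in zip(global_model, new_global))
```

In the actual framework, the integer indices would be entropy-coded into a compact NNC bitstream, which is where the sub-1% communication overhead reported on the Berlin V2X dataset comes from; the averaging step itself is unchanged.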