🤖 AI Summary
This work addresses two critical challenges in deploying large-scale neural networks in wireless edge environments such as O-RAN: the high local memory consumption of on-device training and the backhaul bandwidth bottleneck caused by high-dimensional model updates. To overcome these limitations, the authors propose CoCo-Fed, a novel federated learning framework that enables memory-efficient local training through low-rank gradient projection and introduces an orthogonal subspace superposition transmission protocol that compresses multi-layer updates into a single matrix per transmission. CoCo-Fed is the first approach to jointly optimize local memory usage and global communication overhead in federated learning without requiring additional inference parameters, and it comes with provable convergence guarantees. Evaluated on an angle-of-arrival estimation task under non-IID settings, the method significantly reduces memory and communication costs while maintaining robust performance.
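The memory-saving idea in the summary can be illustrated with a small sketch. The paper describes a "double-dimension down-projection" of gradients so that optimizer state lives in a low-rank core rather than at full weight size; the rank `r`, the SVD-based choice of projectors, and the momentum buffer below are illustrative assumptions, not the authors' exact optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, r = 64, 32, 4                      # weight shape (m x n), projection rank r
W = rng.standard_normal((m, n)) * 0.01   # a weight matrix
G = rng.standard_normal((m, n))          # its full-size gradient from backprop

# Down-project BOTH dimensions: take the top-r left/right singular subspaces
# of the gradient as projectors (one plausible choice; others are possible).
U, _, Vt = np.linalg.svd(G, full_matrices=False)
P, Qt = U[:, :r], Vt[:r, :]              # (m x r) and (r x n) projectors

# The optimizer state is now an (r x r) core instead of an (m x n) buffer.
g_low = P.T @ G @ Qt.T                   # (r x r) compressed gradient
state = np.zeros((r, r))                 # e.g. a momentum buffer, now tiny
state = 0.9 * state + g_low

# Up-project the low-rank step and apply it to the full weight, so no
# extra parameters survive into inference.
lr = 1e-2
W -= lr * (P @ state @ Qt)
```

The point of the double projection is the state size: an Adam- or momentum-style buffer shrinks from `m*n` entries to `r*r`, while the deployed weight `W` keeps its original shape.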
📝 Abstract
The deployment of large-scale neural networks within the Open Radio Access Network (O-RAN) architecture is pivotal for enabling native edge intelligence. However, this paradigm faces two critical bottlenecks: the prohibitive memory footprint required for local training on resource-constrained gNBs, and the saturation of bandwidth-limited backhaul links during the global aggregation of high-dimensional model updates. To address these challenges, we propose CoCo-Fed, a novel Compression and Combination-based Federated learning framework that unifies local memory efficiency and global communication reduction. Locally, CoCo-Fed breaks the memory wall by performing a double-dimension down-projection of gradients, adapting the optimizer to operate on low-rank structures without introducing additional inference parameters or latency. Globally, we introduce a transmission protocol based on orthogonal subspace superposition, where layer-wise updates are projected and superimposed into a single consolidated matrix per gNB, drastically reducing the backhaul traffic. Beyond empirical designs, we establish a rigorous theoretical foundation, proving the convergence of CoCo-Fed even under unsupervised learning conditions suitable for wireless sensing tasks. Extensive simulations on an angle-of-arrival estimation task demonstrate that CoCo-Fed significantly outperforms state-of-the-art baselines in both memory and communication efficiency while maintaining robust convergence under non-IID settings.
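The orthogonal subspace superposition protocol can be sketched in a few lines. The key property is that if each layer's update is embedded with a basis whose columns are orthogonal to every other layer's basis, all layers can be summed into one consolidated matrix and still be recovered exactly at the aggregator. The shapes, the QR-based basis construction, and the variable names below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: L layers, each with an (r x n) low-rank update factor.
r, n, L = 4, 6, 3
deltas = [rng.standard_normal((r, n)) for _ in range(L)]

# Mutually orthogonal bases: QR of a square random matrix gives an
# orthogonal Q; splitting its columns yields per-layer bases Q_l with
# Q_l^T Q_l = I and Q_l^T Q_k = 0 for l != k.
D = L * r                                        # rows of the consolidated matrix
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
bases = [Q[:, l * r:(l + 1) * r] for l in range(L)]

# Superposition: every layer's update collapses into ONE (D x n) matrix,
# so the gNB transmits a single matrix instead of L separate ones.
S = sum(B @ d for B, d in zip(bases, deltas))

# Recovery at the aggregator: project S back with each layer's basis.
recovered = [B.T @ S for B in bases]
for d, d_hat in zip(deltas, recovered):
    assert np.allclose(d, d_hat)                 # cross terms vanish by orthogonality
```

Recovery is exact here because `B.T @ S` keeps only that layer's term; every other layer's contribution is annihilated by the cross-orthogonality of the bases.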