CoCo-Fed: A Unified Framework for Memory- and Communication-Efficient Federated Learning at the Wireless Edge

📅 2026-01-02
🏛️ arXiv.org
🤖 AI Summary
This work addresses two critical challenges in deploying large-scale neural networks in wireless edge environments such as O-RAN: high local memory consumption during training and backhaul bandwidth bottlenecks caused by high-dimensional model updates. To overcome these limitations, the authors propose CoCo-Fed, a novel federated learning framework that enables memory-efficient local training through low-rank gradient projection and introduces an orthogonal subspace superposition transmission protocol that compresses multi-layer updates into a single matrix per transmission. CoCo-Fed is the first approach to jointly optimize both local memory usage and global communication overhead in federated learning without requiring additional inference parameters, and it comes with theoretically provable convergence guarantees. Evaluated on an angle-of-arrival estimation task under non-IID settings, the method significantly reduces memory and communication costs while maintaining robust performance.
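The low-rank gradient projection described above can be illustrated with a minimal NumPy sketch. The SVD-based projection rule, the rank, and all variable names here are assumptions chosen for illustration; the paper's exact double-dimension down-projection may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def down_project(G, r):
    """Compress an m x n gradient into an r x r core via its top-r
    left/right singular subspaces (one possible instantiation of a
    double-dimension down-projection; not the paper's exact rule)."""
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P, Q = U[:, :r], Vt[:r, :].T      # m x r and n x r bases
    core = P.T @ G @ Q                # r x r compressed gradient
    return P, Q, core

def up_project(P, Q, core):
    """Map the low-rank optimizer step back to full dimension."""
    return P @ core @ Q.T

m, n, r = 64, 128, 4
G = rng.standard_normal((m, 16)) @ rng.standard_normal((16, n)) / 16
P, Q, core = down_project(G, r)
G_hat = up_project(P, Q, core)

# Optimizer state now lives on r*r entries instead of m*n.
print(core.shape, G_hat.shape)  # (4, 4) (64, 128)
```

The memory saving comes from keeping optimizer statistics (e.g. moment estimates) at the `r × r` core size rather than the full `m × n` gradient size.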

📝 Abstract
The deployment of large-scale neural networks within the Open Radio Access Network (O-RAN) architecture is pivotal for enabling native edge intelligence. However, this paradigm faces two critical bottlenecks: the prohibitive memory footprint required for local training on resource-constrained gNBs, and the saturation of bandwidth-limited backhaul links during the global aggregation of high-dimensional model updates. To address these challenges, we propose CoCo-Fed, a novel Compression and Combination-based Federated learning framework that unifies local memory efficiency and global communication reduction. Locally, CoCo-Fed breaks the memory wall by performing a double-dimension down-projection of gradients, adapting the optimizer to operate on low-rank structures without introducing additional inference parameters/latency. Globally, we introduce a transmission protocol based on orthogonal subspace superposition, where layer-wise updates are projected and superimposed into a single consolidated matrix per gNB, drastically reducing the backhaul traffic. Beyond empirical designs, we establish a rigorous theoretical foundation, proving the convergence of CoCo-Fed even under unsupervised learning conditions suitable for wireless sensing tasks. Extensive simulations on an angle-of-arrival estimation task demonstrate that CoCo-Fed significantly outperforms state-of-the-art baselines in both memory and communication efficiency while maintaining robust convergence under non-IID settings.
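The orthogonal subspace superposition idea in the abstract — projecting layer-wise updates into mutually orthogonal subspaces so they can be summed into one matrix and separated again at the receiver — can be sketched in a few lines. The sizes, basis construction, and names below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: three layer updates, each compressed to k coefficients.
k, d = 5, 32                     # k coefficients per layer, ambient dim d
n_layers = 3
updates = [rng.standard_normal(k) for _ in range(n_layers)]

# One orthonormal basis, sliced into disjoint (hence orthogonal) subspaces.
Q, _ = np.linalg.qr(rng.standard_normal((d, n_layers * k)))
bases = [Q[:, i * k:(i + 1) * k] for i in range(n_layers)]

# Transmitter: superimpose all layer updates into a single vector.
s = sum(B @ u for B, u in zip(bases, updates))

# Receiver: separate each layer by projecting onto its own subspace.
recovered = [B.T @ s for B in bases]
for u, u_hat in zip(updates, recovered):
    assert np.allclose(u, u_hat)
```

Because the subspaces are orthogonal, the superimposed vector `s` carries all layers simultaneously, so each gNB transmits one consolidated object instead of one update per layer.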
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Memory Efficiency
Communication Efficiency
Wireless Edge
O-RAN
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Memory Efficiency
Communication Efficiency
Gradient Compression
Orthogonal Subspace Superposition
Zhiheng Guo
School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China
Zhaoyang Liu
Tongyi Lab, Alibaba Group
LLM, Recommendation
Zihan Cen
School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China
Chenyuan Feng
Department of Computer Science, University of Exeter, Exeter EX4 4QJ, U.K.
Xinghua Sun
Sun Yat-sen University
stochastic modeling of wireless networks, machine learning for networking
Xiang Chen
Sun Yat-sen University
wireless communications, signal processing
Tony Q. S. Quek
Singapore University of Technology and Design, Singapore
Xijun Wang
School of Electronics and Information Technology, Sun Yat-sen University
Age of Information, Deep Reinforcement Learning