Air-FedGA: A Grouping Asynchronous Federated Learning Mechanism Exploiting Over-the-air Computation

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in edge-based federated learning (FL), namely communication constraints, device heterogeneity, and non-IID data distributions, this paper proposes a grouped asynchronous FL framework that leverages over-the-air computation (AirComp). The method integrates AirComp's analog aggregation with asynchronous model updates by partitioning edge devices into groups: intra-group aggregation is synchronized via AirComp, while inter-group updates proceed asynchronously, relaxing the global synchronization requirement and improving spectral efficiency. The authors establish convergence guarantees and jointly optimize transmit power control, denoising factors, and the worker grouping strategy. Experiments on standard models and datasets show that the approach reduces training time by 29.9%-71.6% relative to state-of-the-art baselines while maintaining model accuracy.
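For intuition, here is a minimal sketch of what AirComp-style analog aggregation inside one group might look like: each worker scales its model update by a transmit power factor, the multiple access channel superimposes the analog signals (plus noise), and the server applies a denoising factor to recover the group average. The names (`aircomp_group_aggregate`, `power_scale`, `denoise`), the channel-inversion power choice, and the noise model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def aircomp_group_aggregate(updates, h, power_scale, denoise, noise_std=0.01):
    """Illustrative AirComp aggregation for one group (assumed, not Air-FedGA's
    exact scheme).

    updates     : list of local model-update vectors, one per worker
    h           : per-worker channel gains (assumed known at the transmitters)
    power_scale : per-worker power scaling factors b_k
    denoise     : receive-side denoising factor eta
    """
    # Each worker pre-scales its update; the MAC superimposes the analog signals.
    superposed = sum(b * g * x for b, g, x in zip(power_scale, h, updates))
    # Additive receiver noise models the wireless channel.
    noise = rng.normal(0.0, noise_std, size=superposed.shape)
    # The server rescales the received sum to estimate the group average.
    return (superposed + noise) / (denoise * len(updates))

# Toy usage: 4 workers with channel-inverting power control b_k = eta / h_k,
# so the superposed signal is an (almost) unbiased sum of the updates.
d = 8
updates = [rng.normal(size=d) for _ in range(4)]
h = np.array([0.9, 1.1, 0.8, 1.0])
eta = 1.0
b = eta / h
group_avg = aircomp_group_aggregate(updates, h, b, eta)
print(np.allclose(group_avg, np.mean(updates, axis=0), atol=0.1))  # True
```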

📝 Abstract
Federated learning (FL) is a paradigm for training AI models over distributed edge devices (i.e., workers) using their local data, while confronting challenges including communication resource constraints, edge heterogeneity, and non-IID data. Over-the-air computation (AirComp) is a promising technique for efficient use of communication resources during model aggregation, leveraging the superposition property of a wireless multiple access channel (MAC). However, AirComp requires strict synchronization among edge devices, which is hard to achieve in heterogeneous scenarios. In this paper, we propose an AirComp-based grouping asynchronous federated learning mechanism (Air-FedGA), which combines the advantages of AirComp and asynchronous FL to address the communication and heterogeneity challenges. Specifically, Air-FedGA organizes workers into groups and performs over-the-air aggregation within each group, while groups communicate asynchronously with the parameter server to update the global model. In this way, Air-FedGA accelerates FL model training through over-the-air aggregation while relaxing the synchronization requirement of this aggregation technique. We theoretically prove the convergence of Air-FedGA. We formulate a training time minimization problem for Air-FedGA and propose a power control and worker grouping algorithm to solve it, jointly optimizing the power scaling factors at edge devices, the denoising factors at the parameter server, and the worker grouping strategy. We conduct experiments on classical models and datasets, and the results demonstrate that our mechanism and algorithm speed up FL model training by 29.9%-71.6% compared with state-of-the-art solutions.
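The abstract's grouping-plus-asynchrony design can be sketched as a server loop: each group aggregates synchronously via AirComp internally, then pushes its group model to the parameter server whenever it finishes, and the server mixes it into the global model without waiting for other groups. The staleness-discounted mixing weight below is a common asynchronous-FL heuristic (FedAsync-style) assumed here for illustration; the paper's actual update rule may differ.

```python
import numpy as np

class AsyncGroupServer:
    """Minimal sketch of a parameter server mixing in asynchronous group
    updates. The staleness-discounted weight is an assumed heuristic, not
    Air-FedGA's exact rule."""

    def __init__(self, global_model, base_weight=0.5):
        self.global_model = global_model
        self.version = 0          # how many group updates have been applied
        self.base_weight = base_weight

    def apply_group_update(self, group_model, group_version):
        # Staleness = global updates applied since this group pulled the model.
        staleness = self.version - group_version
        alpha = self.base_weight / (1.0 + staleness)
        # Mix the (AirComp-aggregated) group model into the global model.
        self.global_model = (1 - alpha) * self.global_model + alpha * group_model
        self.version += 1
        return self.global_model

# Toy usage: two groups finish at different times; the later one is stale.
server = AsyncGroupServer(np.zeros(4))
fast_group = np.ones(4)          # finished quickly, trained on version 0
server.apply_group_update(fast_group, group_version=0)
slow_group = 2 * np.ones(4)      # finished later, also trained on version 0
server.apply_group_update(slow_group, group_version=0)  # staleness = 1
print(server.global_model)
```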
Problem

Research questions and friction points this paper is trying to address.

Addresses communication constraints in federated learning using AirComp
Reduces synchronization needs in heterogeneous edge device scenarios
Optimizes power control and worker grouping for faster training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grouping asynchronous federated learning mechanism
Over-the-air computation for efficient aggregation
Joint optimization of power and grouping
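As a rough illustration of the joint power-and-grouping idea, the greedy heuristic below groups workers by similar computation speed so that fast workers are not held back by stragglers inside a synchronous AirComp group. The paper instead solves a formal training-time minimization jointly with power control, so this is only an assumed toy stand-in, not its algorithm.

```python
def group_by_speed(worker_speeds, group_size):
    """Toy grouping heuristic (assumed, not the paper's algorithm): sort
    workers by speed and chunk them, so each synchronous AirComp group
    contains similarly fast workers and intra-group waiting stays small."""
    order = sorted(range(len(worker_speeds)), key=lambda i: worker_speeds[i])
    return [order[i:i + group_size] for i in range(0, len(order), group_size)]

# Toy usage: 6 workers with heterogeneous speeds, groups of 2.
speeds = [1.0, 9.0, 1.2, 8.5, 5.0, 4.8]
print(group_by_speed(speeds, 2))  # [[0, 2], [5, 4], [3, 1]]
```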