Mixed-Precision Federated Learning via Multi-Precision Over-The-Air Aggregation

📅 2024-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing over-the-air federated learning (OTA-FL) assumes a uniform client bitwidth, limiting adaptability to hardware heterogeneity in edge networks. This work proposes a mixed-precision OTA-FL framework that enables clients to concurrently perform approximate computing (AxC) at diverse bitwidths (e.g., 4-, 16-, and 32-bit) while supporting analog-domain superposition aggregation of multi-precision gradients over wireless channels. We introduce the first multi-precision gradient modulation mechanism, eliminating costly precision-conversion overhead and jointly optimizing accuracy, energy efficiency, and system performance. Experimental results demonstrate that 4-bit clients achieve over 10% higher model accuracy than 32-bit and 16-bit baselines, with energy consumption reduced by 65% and 13%, respectively. The framework significantly enhances both the efficiency and the adaptability of heterogeneous edge systems.

📝 Abstract
Over-the-Air Federated Learning (OTA-FL) is a privacy-preserving distributed learning mechanism that aggregates model updates in the electromagnetic channel rather than at the server. A critical gap in existing OTA-FL research is the assumption of homogeneous client computational bit precision. In real-world applications, however, clients with varying hardware resources may exploit approximate computing (AxC) to operate at different bit precisions optimized for energy and computational efficiency. Model updates of varying precision across clients pose an open challenge for OTA-FL, as they are incompatible with wireless modulation superposition. Here, we propose a mixed-precision OTA-FL framework for clients with multiple bit precisions, demonstrating the following innovations: (i) a superior trade-off for both server and clients, within the constraints of varying edge computing capabilities, energy efficiency, and learning accuracy requirements, compared to homogeneous client bit precision; and (ii) a multi-precision gradient modulation scheme that ensures compatibility with OTA aggregation and eliminates the overhead of precision conversion. Through a case study with real-world data, we validate that our modulation scheme enables AxC-based mixed-precision OTA-FL. Compared to homogeneous standard precisions of 32-bit and 16-bit, our framework achieves more than 10% higher accuracy for 4-bit ultra-low-precision clients, with over 65% and 13% energy savings, respectively. This demonstrates the great potential of our mixed-precision OTA-FL approach in heterogeneous edge computing environments.
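To make the core idea concrete, here is a minimal simulation sketch of how mixed-precision gradients could be superposed over the air: each client quantizes its gradient to its own bitwidth, the channel sums the transmitted amplitudes, and the server rescales the sum into an averaged update. The function names, the uniform symmetric quantizer, and the noiseless-channel assumption are illustrative choices for this sketch, not the paper's actual modulation scheme.

```python
import numpy as np

def quantize(grad, bits):
    """Uniform symmetric quantization of a gradient vector to `bits` bits
    (a stand-in for each client's AxC-constrained precision)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(grad)) / levels
    q = np.round(grad / scale)
    return q * scale  # dequantized amplitude actually put on the channel

def ota_aggregate(client_grads, bitwidths):
    """Simulate analog superposition aggregation: clients transmit their
    quantized gradients simultaneously, the wireless channel adds the
    waveforms, and the server rescales the sum into a mean update."""
    tx = [quantize(g, b) for g, b in zip(client_grads, bitwidths)]
    superposed = np.sum(tx, axis=0)        # channel-level addition
    return superposed / len(client_grads)  # server-side rescaling

rng = np.random.default_rng(0)
grads = [rng.standard_normal(8) for _ in range(3)]
agg = ota_aggregate(grads, bitwidths=[4, 16, 32])  # heterogeneous clients
```

Note that no precision conversion is needed before aggregation: quantization error simply enters as a per-client perturbation of the transmitted amplitudes, which is what makes precision-heterogeneous superposition possible in this toy model.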
Problem

Research questions and friction points this paper is trying to address.

Handles heterogeneous client bit precision in federated learning
Optimizes energy and computational efficiency
Ensures compatibility with over-the-air aggregation techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-precision OTA-FL framework
Multi-precision gradient modulation
Energy-efficient edge computing