Privacy Enhancement in Over-the-Air Federated Learning via Adaptive Receive Scaling

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
In wireless federated learning with over-the-air aggregation (OTAA), the receive scaling factor must balance training convergence against device-level privacy: a larger factor reduces the effective noise power and speeds convergence but weakens the channel-noise masking that underpins differential privacy (DP), while a smaller factor preserves privacy at the cost of signal-to-noise ratio (SNR) and slower training. This paper proposes AdaScale, a mechanism that adapts the receive scaling factor in each round under dynamic wireless channels. It formulates the minimization of time-averaged Rényi differential privacy (RDP) leakage as a stochastic optimization problem with a long-term constraint ensuring convergence of the global loss; because the problem depends on unknown future channel information, standard Lyapunov optimization does not apply, and the authors instead design a new online algorithm built on efficiently solvable per-round problems. Theoretical analysis establishes upper bounds on dynamic regret and constraint violation, showing that AdaScale attains diminishing dynamic regret in time-averaged RDP leakage while guaranteeing convergence of training. Experiments show that AdaScale matches the model accuracy of state-of-the-art methods while substantially reducing RDP/DP leakage (an average reduction of 38.2%), reconciling communication reliability with privacy.
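To make the per-round trade-off concrete, below is a minimal, hypothetical drift-plus-penalty sketch in Python: per-round RDP leakage a·η² grows with the scaling factor, a channel-dependent error term c_t/η² shrinks with it, and a virtual queue Q enforces a long-term error budget. All constants, the leakage/error models, and the closed-form minimizer are illustrative assumptions, not the paper's AdaScale (which the authors note departs from standard Lyapunov optimization because future channel information is unknown).

```python
import numpy as np

# Hypothetical drift-plus-penalty sketch of adaptive receive scaling.
# Assumed models: RDP leakage eps_t(eta) = a * eta^2 (larger scaling leaks more),
# convergence error err_t(eta) = c_t / eta^2 (larger scaling suppresses noise).

rng = np.random.default_rng(0)

alpha, Delta, sigma_n = 2.0, 1.0, 1.0   # RDP order, update sensitivity, channel-noise std
E_max = 0.5                              # long-term budget on per-round convergence error
V = 10.0                                 # penalty weight: privacy leakage vs. queue drift
eta_min, eta_max = 0.05, 5.0             # feasible scaling range (e.g., power limits)

a = alpha * Delta**2 / (2 * sigma_n**2)  # coefficient of the assumed RDP leakage model
Q = 0.0                                  # virtual queue tracking constraint violation
T = 200

for t in range(T):
    c_t = rng.uniform(0.2, 2.0)          # channel-dependent error coefficient this round
    # Per-round problem: minimize V*a*eta^2 + Q*c_t/eta^2 over eta in [eta_min, eta_max].
    # First-order condition gives the unconstrained minimizer eta^4 = Q*c_t / (V*a).
    eta = (Q * c_t / (V * a)) ** 0.25 if Q > 0 else eta_min
    eta = float(np.clip(eta, eta_min, eta_max))

    err_t = c_t / eta**2                  # effective-noise error shrinks with scaling
    Q = max(0.0, Q + err_t - E_max)       # virtual-queue (drift) update

print(f"final virtual queue: {Q:.3f}")
```

With Q = 0 the loop picks the most private feasible factor; as constraint violation accumulates, the queue pushes η upward, which is the qualitative behavior the summary describes.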

📝 Abstract
In Federated Learning (FL) with over-the-air aggregation, the quality of the signal received at the server critically depends on the receive scaling factors. While a larger scaling factor can reduce the effective noise power and improve training performance, it also compromises the privacy of devices by reducing uncertainty. In this work, we aim to adaptively design the receive scaling factors across training rounds to balance the trade-off between training convergence and privacy in an FL system under dynamic channel conditions. We formulate a stochastic optimization problem that minimizes the overall Rényi differential privacy (RDP) leakage over the entire training process, subject to a long-term constraint that ensures convergence of the global loss function. Our problem depends on unknown future information, and we observe that standard Lyapunov optimization is not applicable. Thus, we develop a new online algorithm, termed AdaScale, based on a sequence of novel per-round problems that can be solved efficiently. We further derive upper bounds on the dynamic regret and constraint violation of AdaScale, establishing that it achieves diminishing dynamic regret in terms of time-averaged RDP leakage while ensuring convergence of FL training to a stationary point. Numerical experiments on canonical classification tasks show that our approach effectively reduces RDP and DP leakages compared with state-of-the-art benchmarks without compromising learning performance.
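The stochastic program the abstract describes can be read, in one plausible schematic form, as minimizing time-averaged RDP leakage subject to a long-term convergence constraint; the symbols ε_t, g_t, and the budget G below are illustrative stand-ins, not the paper's notation:

```latex
% Schematic form of the stochastic optimization problem (illustrative symbols).
\min_{\{\eta_t\}_{t=1}^{T}}\;
  \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\big[\epsilon_t(\eta_t)\big]
\qquad \text{s.t.}\qquad
  \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\big[g_t(\eta_t)\big]\le G
```

Here η_t is the round-t receive scaling factor, ε_t(η_t) that round's RDP leakage (increasing in η_t), and g_t(η_t) an error term from the convergence bound (decreasing in η_t).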
Problem

Research questions and friction points this paper is trying to address.

Balancing training convergence and privacy in federated learning
Adaptively designing receive scaling factors under dynamic channels
Minimizing Rényi differential privacy leakage while ensuring convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive receive scaling balances privacy and convergence
Online AdaScale algorithm solves per-round optimization efficiently
Dynamic regret bounds guarantee diminishing regret in time-averaged RDP leakage during training (formalized in the sketch below)
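As a hedged sketch of what "diminishing dynamic regret" means here (the comparator and notation are assumed, not taken from the paper): the cumulative RDP leakage under the chosen factors η_t is compared against per-round optimal factors η_t*, and the gap grows sublinearly in T:

```latex
% Dynamic regret against a per-round comparator (assumed notation).
\mathrm{Reg}_T \;=\; \sum_{t=1}^{T}\epsilon_t(\eta_t)\;-\;\sum_{t=1}^{T}\epsilon_t(\eta_t^{\ast}),
\qquad \frac{\mathrm{Reg}_T}{T}\;\to\;0 \ \text{ as } T\to\infty
```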