🤖 AI Summary
In federated learning, adversarial training is hindered by strict privacy constraints and limited computational resources on edge devices. Method: This paper proposes an implicit proxy data synthesis approach that requires no raw client data. It implicitly distills the underlying data distribution from clients' model parameter update trajectories, constructing a high-fidelity, privacy-preserving proxy dataset. The method integrates gradient trajectory inversion, implicit data synthesis, and a meta-optimization framework, augmented with adversarial transferability modeling and trajectory regularization. Contribution/Results: By decoupling robustness enhancement from on-device computation, the approach achieves an 8.2% improvement in PGD-20 accuracy on CIFAR-10/100 while imposing zero additional computational overhead on clients.
📄 Abstract
Deep learning models deployed on edge devices are increasingly used in safety-critical applications. However, their vulnerability to adversarial perturbations poses significant risks, especially in Federated Learning (FL) settings where identical models are distributed across thousands of clients. While adversarial training is a strong defense, it is difficult to apply in FL due to strict client-data privacy constraints and the limited compute available on edge devices. In this work, we introduce TrajSyn, a privacy-preserving framework that enables effective server-side adversarial training by synthesizing a proxy dataset from the trajectories of client model updates, without accessing raw client data. We show that TrajSyn consistently improves adversarial robustness on image classification benchmarks with no extra compute burden on the client device.
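To make the trajectory-inversion idea concrete, here is a deliberately minimal sketch of the core mechanism: the server observes only a client's parameter update (one step of its trajectory), then optimizes a synthetic proxy batch so that the gradient it induces on the shared model matches that observed update. This is an illustrative toy with a linear model and squared loss, not TrajSyn itself; all names (`grad`, `X_syn`, `y_syn`, the step sizes and iteration counts) are assumptions for the sketch, and the paper's full method additionally involves multi-step trajectories, meta-optimization, and regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # model dimension

def grad(w, X, y):
    # Gradient of 0.5 * mean squared error for a linear model y_hat = X @ w.
    return X.T @ (X @ w - y) / len(X)

# --- Client side (private; the server never sees X_priv, y_priv) ---
w = rng.normal(size=d)                   # shared global weights
X_priv = rng.normal(size=(32, d))
y_priv = X_priv @ rng.normal(size=d)
lr = 0.1
w_after = w - lr * grad(w, X_priv, y_priv)   # one local SGD step

# --- Server side: recover the update direction from the trajectory ---
observed_update = (w - w_after) / lr     # equals the client's gradient

# Synthesize a proxy batch whose gradient matches the observed update,
# by gradient descent on 0.5 * ||grad(w, X_syn, y_syn) - observed_update||^2.
X_syn = rng.normal(size=(8, d))
y_syn = rng.normal(size=8)
eta = 0.05
for _ in range(2000):
    r = grad(w, X_syn, y_syn) - observed_update   # gradient-matching residual
    err = X_syn @ w - y_syn
    # Chain-rule gradients of the matching loss w.r.t. the synthetic data.
    gX = (np.outer(err, r) + np.outer(X_syn @ r, w)) / len(X_syn)
    gy = -(X_syn @ r) / len(X_syn)
    X_syn -= eta * gX
    y_syn -= eta * gy
```

After this loop, training on `(X_syn, y_syn)` reproduces the client's update direction without exposing `X_priv`, which is the property that lets the server run adversarial training entirely on its side.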