🤖 AI Summary
To address the challenge of balancing robustness, model consistency, and efficiency in Federated Adversarial Training (FAT) on memory-constrained edge devices, this paper proposes a cascaded adversarial learning framework. Our method introduces three key innovations: (1) the first cascaded adversarial learning scheme with strong-convexity regularization, theoretically guaranteeing both local-global model consistency and adversarial robustness; (2) an adaptive perturbation adjustment mechanism coupled with differentiated module assignment, jointly optimizing the utility-robustness trade-off and objective alignment across clients; and (3) a lightweight server-side training coordinator that enables efficient local training while preserving global robustness. Extensive experiments demonstrate that our approach maintains the accuracy and robustness of end-to-end FAT while reducing memory overhead by 80% and accelerating training by up to 10.8× compared to baseline methods.
📝 Abstract
Federated Adversarial Training (FAT) can equip Federated Learning (FL) with robustness against adversarial examples, marking a meaningful step toward trustworthy AI. However, FAT requires large models to preserve high accuracy while achieving strong robustness, incurring high memory-swapping latency when training on memory-constrained edge devices. Existing memory-efficient FL methods suffer from poor accuracy and weak robustness due to inconsistency between local and global models. In this paper, we propose FedProphet, a novel FAT framework that achieves memory efficiency, robustness, and consistency simultaneously. FedProphet reduces the memory requirement of local training while guaranteeing adversarial robustness through adversarial cascade learning with strong-convexity regularization, and we show that strong robustness also implies low inconsistency in FedProphet. We further develop a training coordinator on the FL server, with Adaptive Perturbation Adjustment for utility-robustness balance and Differentiated Module Assignment for objective inconsistency mitigation. FedProphet significantly outperforms other baselines under different experimental settings, maintaining the accuracy and robustness of end-to-end FAT with 80% memory reduction and up to 10.8× speedup in training time.
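To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of a local adversarial training step with a strong-convexity proximal term: an FGSM perturbation is applied to the input, and an added penalty `mu/2 * ||w_local - w_global||^2` makes the local objective strongly convex around the global model, pulling local weights toward it. The logistic-regression model, step sizes, and penalty weight are all hypothetical simplifications.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    # FGSM: move the input along the sign of the loss gradient w.r.t. x.
    p = sigmoid(x @ w)
    grad_x = (p - y) * w  # d(logistic loss)/dx for a single sample
    return x + eps * np.sign(grad_x)

def local_adv_step(x, y, w_local, w_global, lr=0.1, eps=0.1, mu=1.0):
    # One local FAT step on the adversarial example, plus the gradient of
    # the strong-convexity (proximal) term mu/2 * ||w_local - w_global||^2,
    # which keeps the local model consistent with the global one.
    x_adv = fgsm_perturb(x, y, w_local, eps)
    p = sigmoid(x_adv @ w_local)
    grad_w = (p - y) * x_adv + mu * (w_local - w_global)
    return w_local - lr * grad_w

# Toy usage: a few local steps on one sample.
rng = np.random.default_rng(0)
w_global = np.zeros(3)
w_local = np.ones(3)
x, y = rng.normal(size=3), 1.0
for _ in range(5):
    w_local = local_adv_step(x, y, w_local, w_global)
```

With `mu > 0` the local loss is strongly convex in `w_local`, which is the property the paper's regularization exploits to bound local-global inconsistency.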