FedProphet: Memory-Efficient Federated Adversarial Training via Robust and Consistent Cascade Learning

📅 2024-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing robustness, model consistency, and efficiency in Federated Adversarial Training (FAT) on memory-constrained edge devices, this paper proposes a cascaded adversarial learning framework. The method introduces three key innovations: (1) the first strong-convexity-regularized cascaded adversarial learning scheme, theoretically guaranteeing both local-global model consistency and adversarial robustness; (2) an adaptive perturbation adjustment mechanism coupled with differentiated module assignment, jointly optimizing the utility-robustness trade-off and objective alignment across clients; and (3) a lightweight server-side training coordinator that enables efficient local training while preserving global robustness. Extensive experiments demonstrate that the approach maintains the accuracy and robustness of end-to-end FAT while reducing memory overhead by 80% and accelerating training by up to 10.8x compared to baseline methods.

📝 Abstract
Federated Adversarial Training (FAT) can supplement Federated Learning (FL) with robustness against adversarial examples, promoting a meaningful step toward trustworthy AI. However, FAT requires large models to preserve high accuracy while achieving strong robustness, incurring high memory-swapping latency when training on memory-constrained edge devices. Existing memory-efficient FL methods suffer from poor accuracy and weak robustness due to inconsistent local and global models. In this paper, we propose FedProphet, a novel FAT framework that can achieve memory efficiency, robustness, and consistency simultaneously. FedProphet reduces the memory requirement in local training while guaranteeing adversarial robustness by adversarial cascade learning with strong convexity regularization, and we show that the strong robustness also implies low inconsistency in FedProphet. We also develop a training coordinator on the server of FL, with Adaptive Perturbation Adjustment for utility-robustness balance and Differentiated Module Assignment for objective inconsistency mitigation. FedProphet significantly outperforms other baselines under different experimental settings, maintaining the accuracy and robustness of end-to-end FAT with 80% memory reduction and up to 10.8x speedup in training time.
Problem

Research questions and friction points this paper is trying to address.

Reduces memory usage in federated adversarial training
Ensures robustness and consistency in edge device training
Balances utility and robustness with adaptive strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial cascade learning for robustness
Strong convexity regularization reduces inconsistency
Adaptive Perturbation Adjustment balances utility-robustness
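The core idea behind the first two innovations can be illustrated with a minimal sketch. This is not the paper's implementation: it shows, on a toy linear module, how a local FAT objective can combine an adversarial loss (here, a one-step FGSM perturbation) with a strongly convex proximal term that pulls the local module toward the global weights for consistency. The names `mu` and `epsilon` and the squared-error loss are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad_x, epsilon=0.1):
    """One-step sign-gradient (FGSM-style) perturbation of the input."""
    return x + epsilon * np.sign(grad_x)

def local_objective(w, w_global, x, y, mu=0.5, epsilon=0.1):
    """Adversarial squared loss on a linear module plus a strong-convexity term.

    The proximal term (mu/2)*||w - w_global||^2 makes the local objective
    mu-strongly convex in w, discouraging drift from the global model.
    """
    pred = x @ w
    # Gradient of (x.w - y)^2 with respect to the input x.
    grad_x = 2.0 * (pred - y) * w
    # Evaluate the loss on the adversarially perturbed input.
    x_adv = fgsm_perturb(x, grad_x, epsilon)
    adv_loss = (x_adv @ w - y) ** 2
    prox = (mu / 2.0) * np.sum((w - w_global) ** 2)
    return adv_loss + prox
```

In a cascaded setting, `w` would be the weights of only the currently trained module (with earlier modules frozen), which is what keeps the per-round memory footprint small.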
Minxue Tang
Duke University
Machine Learning, Deep Learning

Yitu Wang
Department of Electrical and Computer Engineering, Duke University

Jingyang Zhang
Department of Electrical and Computer Engineering, Duke University

Louis DiValentin
Cyber Security Lab, Accenture

Aolin Ding
Security Research Scientist, Accenture

Amin Hass
Cyber Security Lab, Accenture

Yiran Chen
Department of Electrical and Computer Engineering, Duke University