AI Summary
To address the vulnerability of pretrained models to adversarial attacks, poor global model adaptability to heterogeneous client data distributions, and high communication overhead in edge federated learning, this paper proposes a two-stage collaborative fine-tuning framework. In Stage I, lightweight local adversarial fine-tuning and efficient parameter upload are achieved via Low-Rank Adaptation (LoRA). In Stage II, a game-theoretic layer selection mechanism dynamically optimizes each client's model accuracy on benign samples prior to aggregation. This work introduces the first personalized adversarial defense paradigm tailored for data heterogeneity, enabling dynamic trade-offs between robustness and accuracy under privacy preservation. Experiments demonstrate that, compared to state-of-the-art methods, the proposed approach reduces communication overhead by 50×, improves adversarial robustness by 29.5%, and increases benign-sample accuracy by 50.4%.
Abstract
The growing adoption of large pre-trained models in edge computing has made deploying model inference on mobile clients both practical and popular. These devices are inherently vulnerable to direct adversarial attacks, which pose a substantial threat to the robustness and security of deployed models. Federated adversarial training (FAT) has emerged as an effective solution that enhances model robustness while preserving client privacy. However, FAT typically produces a one-size-fits-all global model that struggles to accommodate the diverse and heterogeneous data distributions across clients, yielding insufficiently personalized performance, and it also incurs substantial communication overhead during training. In this paper, we propose Sylva, a personalized collaborative adversarial training framework designed to deliver a customized defense model for each client through a two-phase process. In Phase 1, Sylva employs LoRA for local adversarial fine-tuning, enabling clients to personalize model robustness while drastically reducing communication costs by uploading only the LoRA parameters during federated aggregation. In Phase 2, a game-based layer selection strategy is introduced to enhance accuracy on benign data, further refining the personalized model. This approach ensures that each client receives a tailored defense model that effectively balances robustness and accuracy. Extensive experiments on benchmark datasets demonstrate that Sylva achieves up to 50× improvements in communication efficiency compared to state-of-the-art algorithms, while achieving up to 29.5% and 50.4% enhancements in adversarial robustness and benign accuracy, respectively.
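The communication saving in Phase 1 comes from the standard LoRA decomposition: each frozen pretrained weight W is adapted as W + (alpha/r)·BA, and only the small factors A and B are trained and uploaded. The sketch below illustrates this mechanism with NumPy; the dimensions, rank, and scaling are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Illustrative LoRA sketch (hypothetical shapes/hyperparameters, not Sylva's).
# The frozen weight W is adapted as W + (alpha / r) * B @ A; only the
# low-rank factors A and B would be trained locally and uploaded.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 256, 256, 8, 16

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # zero init: adapter starts as a no-op

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A
    return (W + (alpha / r) * B @ A) @ x

# Only A and B are communicated during aggregation; compare parameter counts.
full_params = W.size                # 256 * 256 = 65536
lora_params = A.size + B.size       # 8*256 + 256*8 = 4096
print(f"upload reduction: {full_params / lora_params:.1f}x")  # 16.0x here
```

With this toy rank-8 adapter the upload shrinks 16-fold; larger models and smaller ranks push the ratio far higher, which is consistent with the order-of-magnitude communication savings the abstract reports.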