FedStrategist: A Meta-Learning Framework for Adaptive and Robust Aggregation in Federated Learning

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, model aggregation is vulnerable to adaptive model poisoning attacks, and existing static defenses lack robustness under data heterogeneity and dynamic threat landscapes. To address this, we propose a meta-learning-driven dynamic robust aggregation framework that formulates defense selection as a real-time, cost-aware control problem. We introduce a lightweight contextual bandit agent that dynamically selects the optimal defense algorithm based on runtime diagnostic metrics, including gradient anomaly scores and client contribution entropy. The framework supports explicit trade-offs between security and model performance via an adjustable "risk tolerance" parameter. Extensive experiments demonstrate that our approach significantly outperforms any single static defense across diverse attacks (e.g., LIE, Min-Max) and heterogeneous data settings. It effectively mitigates stealthy poisoning while preserving global model integrity and convergence stability.

📝 Abstract
Federated Learning (FL) offers a paradigm for privacy-preserving collaborative AI, but its decentralized nature creates significant vulnerabilities to model poisoning attacks. While numerous static defenses exist, their effectiveness is highly context-dependent, often failing against adaptive adversaries or in heterogeneous data environments. This paper introduces FedStrategist, a novel meta-learning framework that reframes robust aggregation as a real-time, cost-aware control problem. We design a lightweight contextual bandit agent that dynamically selects the optimal aggregation rule from an arsenal of defenses based on real-time diagnostic metrics. Through comprehensive experiments, we demonstrate that no single static rule is universally optimal. We show that our adaptive agent successfully learns superior policies across diverse scenarios, including a "Krum-favorable" environment and against a sophisticated "stealth" adversary designed to neutralize specific diagnostic signals. Critically, we analyze the paradoxical scenario where a non-robust baseline achieves high but compromised accuracy, and demonstrate that our agent learns a conservative policy to prioritize model integrity. Furthermore, we prove the agent's policy is controllable via a single "risk tolerance" parameter, allowing practitioners to explicitly manage the trade-off between performance and security. Our work provides a new, practical, and analyzable approach to creating resilient and intelligent decentralized AI systems.
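The abstract's core mechanism, a contextual bandit that picks an aggregation rule each round from diagnostic metrics and a risk-tolerance knob, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the defense names, metric keys (`anomaly_score`, `entropy`), epsilon-greedy exploration, and the specific reward formula are all assumptions for exposition.

```python
import random

# Hypothetical sketch of FedStrategist's defense scheduler: an
# epsilon-greedy contextual bandit over a small arsenal of aggregation
# rules. Context is a coarse bucketing of runtime diagnostics.
DEFENSES = ["fedavg", "krum", "trimmed_mean", "median"]

class DefenseScheduler:
    def __init__(self, risk_tolerance=0.5, epsilon=0.1):
        # risk_tolerance near 1 favors accuracy; near 0 favors security.
        self.risk_tolerance = risk_tolerance
        self.epsilon = epsilon
        self.values = {}   # running mean reward per (context, defense)
        self.counts = {}

    def _bucket(self, metrics):
        # Discretize diagnostics (gradient anomaly score, client
        # contribution entropy) into a coarse context key.
        return (metrics["anomaly_score"] > 0.5, metrics["entropy"] < 1.0)

    def select(self, metrics):
        ctx = self._bucket(metrics)
        if random.random() < self.epsilon:
            return random.choice(DEFENSES)  # explore
        # Exploit: highest estimated reward in this context.
        return max(DEFENSES, key=lambda d: self.values.get((ctx, d), 0.0))

    def update(self, metrics, defense, accuracy_gain, attack_damage):
        # Cost-aware reward: trade observed accuracy gain against
        # estimated attack damage, weighted by the risk-tolerance knob.
        reward = (self.risk_tolerance * accuracy_gain
                  - (1.0 - self.risk_tolerance) * attack_damage)
        key = (self._bucket(metrics), defense)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        v = self.values.get(key, 0.0)
        self.values[key] = v + (reward - v) / n  # incremental mean
```

In a training loop, the server would compute diagnostics after collecting client updates, call `select` to choose the round's aggregation rule, aggregate with that rule, then call `update` with the observed outcome.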
Problem

Research questions and friction points this paper is trying to address.

Dynamic selection of optimal aggregation in federated learning
Adaptive defense against model poisoning attacks
Balancing performance and security in decentralized AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning framework for adaptive aggregation
Dynamic selection of optimal defense rules
Controllable risk-performance trade-off parameter
Md Rafid Haque
Department of Computer Science and Engineering, Islamic University of Technology (IUT), Boardbazar, Gazipur - 1704, Bangladesh
Abu Raihan Mostofa Kamal
Professor of Computer Science, Islamic University of Technology (IUT)
Data Analytics · Security · IoT
Md. Azam Hossain
Department of Computer Science and Engineering, Islamic University of Technology (IUT), Boardbazar, Gazipur - 1704, Bangladesh