Heterogeneity-Oblivious Robust Federated Learning

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In ultra-heterogeneous federated learning—characterized by severe statistical, system, and architectural heterogeneity—poisoning attacks are difficult to detect and model aggregation suffers from poor robustness. To address this, we propose Horus, a novel defense framework. Its core insight is that the LoRA input projection exhibits intrinsic stability under both heterogeneity and poisoning, enabling a poisoning scoring mechanism that requires no explicit heterogeneity awareness. Horus further introduces a projection-aware low-rank aggregation strategy, integrating LoRA-A feature anomaly detection with direction-consistency-based reweighting to suppress malicious update drift while preserving collaborative signals. Extensive experiments across diverse datasets, models, and attack scenarios demonstrate that Horus significantly outperforms state-of-the-art methods, achieving substantially improved robustness without compromising accuracy.

📝 Abstract
Federated Learning (FL) remains highly vulnerable to poisoning attacks, especially under real-world hyper-heterogeneity, where clients differ significantly in data distributions, communication capabilities, and model architectures. Such heterogeneity not only undermines the effectiveness of aggregation strategies but also makes attacks more difficult to detect. Furthermore, high-dimensional models expand the attack surface. To address these challenges, we propose Horus, a heterogeneity-oblivious robust FL framework centered on low-rank adaptations (LoRAs). Rather than aggregating full model parameters, Horus inserts LoRAs into empirically stable layers and aggregates only the LoRAs, reducing the attack surface. We uncover a key empirical observation: the input projection (LoRA-A) is markedly more stable than the output projection (LoRA-B) under both heterogeneity and poisoning. Leveraging this, we design a Heterogeneity-Oblivious Poisoning Score based on LoRA-A features to filter poisoned clients. For the remaining benign clients, we propose a projection-aware aggregation mechanism that preserves collaborative signals while suppressing drift, reweighting each client update by its consistency with the global update direction. Extensive experiments across diverse datasets, model architectures, and attacks demonstrate that Horus consistently outperforms state-of-the-art baselines in both robustness and accuracy.
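The filtering step described in the abstract — scoring clients by their LoRA-A updates and keeping the least anomalous — could be sketched as follows. This is a hypothetical stand-in, not the paper's actual Heterogeneity-Oblivious Poisoning Score: it assumes a simple distance-to-median anomaly score over flattened LoRA-A matrices and an assumed `frac_malicious` upper bound on the attacker fraction.

```python
import numpy as np

def poisoning_scores(lora_A_updates):
    """Score each client's flattened LoRA-A update by its distance to the
    coordinate-wise median across clients (hypothetical stand-in for the
    paper's Heterogeneity-Oblivious Poisoning Score)."""
    X = np.stack([u.ravel() for u in lora_A_updates])  # (n_clients, dim)
    median = np.median(X, axis=0)
    return np.linalg.norm(X - median, axis=1)

def filter_benign(lora_A_updates, frac_malicious=0.2):
    """Return indices of clients with the lowest anomaly scores, assuming
    at most `frac_malicious` of clients are poisoned."""
    scores = poisoning_scores(lora_A_updates)
    n_keep = max(1, int(len(scores) * (1 - frac_malicious)))
    return np.argsort(scores)[:n_keep]
```

Because the score is computed only on LoRA-A, the sketch never needs to model client heterogeneity explicitly, which mirrors the "heterogeneity-oblivious" framing of the paper.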
Problem

Research questions and friction points this paper is trying to address.

FL vulnerability to poisoning attacks under hyper-heterogeneity
High-dimensional models expanding attack surface in FL
Need for robust aggregation to detect and filter poisoned clients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses low-rank adaptations (LoRAs) for robust FL
Filters poisoned clients via LoRA-A stability
Employs projection-aware aggregation for benign clients
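The projection-aware aggregation described above (reweighting benign updates by consistency with the global direction) might look roughly like this. This is a minimal sketch under assumptions not stated in the source: cosine similarity to the mean update is used as the consistency measure, and anti-aligned updates are clipped to zero weight.

```python
import numpy as np

def direction_consistent_aggregate(updates, eps=1e-12):
    """Weighted-average client updates, weighting each by its cosine
    similarity with the mean (global) direction; anti-aligned updates
    get zero weight. Hypothetical sketch of projection-aware reweighting."""
    X = np.stack([u.ravel() for u in updates])      # (n_clients, dim)
    g = X.mean(axis=0)                              # global direction
    g_dir = g / (np.linalg.norm(g) + eps)
    cos = X @ g_dir / (np.linalg.norm(X, axis=1) + eps)
    w = np.clip(cos, 0.0, None)                     # suppress drifted updates
    w = w / (w.sum() + eps)
    return (w[:, None] * X).sum(axis=0).reshape(updates[0].shape)
```

Clipping negative cosines suppresses updates that drift against the collaborative signal, while well-aligned updates keep proportionally higher weight.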
Weiyao Zhang
Institute of Computing Technology, Chinese Academy of Sciences, China
Jinyang Li
Institute of Computing Technology, Chinese Academy of Sciences, China
Qi Song
Institute of Computing Technology, Chinese Academy of Sciences, China
Miao Wang
University of Chinese Academy of Sciences, China
Chungang Lin
Institute of Computing Technology, Chinese Academy of Sciences, China
Haitong Luo
Institute of Computing Technology, Chinese Academy of Sciences, China
Xuying Meng
Institute of Computing Technology, Chinese Academy of Sciences
Yujun Zhang
University of Chinese Academy of Sciences, China