AI Summary
Federated learning suffers from poor robustness under non-IID data and malicious clients, while existing Shapley-value-based contribution evaluation methods incur prohibitive computational overhead and lack scalability. To address this, we propose FedIF, a lightweight, trajectory-based influence estimation framework for client contribution assessment. FedIF efficiently approximates each client's contribution to the global model by analyzing gradient trajectories on a public validation set, incorporating local weight normalization and influence score smoothing. We theoretically establish a tighter bound on the global loss variation under noise compared to prior approaches. Extensive experiments on CIFAR-10 and Fashion-MNIST demonstrate that FedIF significantly outperforms Shapley-based methods in robustness against label noise, gradient perturbations, and adversarial attacks. Moreover, it reduces model aggregation overhead by up to 450x, enabling scalable and reliable federated learning in heterogeneous and hostile environments.
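The summary describes the core computation only at a high level. The minimal sketch below (illustrative, not the authors' released code) shows one way such trajectory-based influence scores could be obtained: each client's influence is approximated by the inner product between its normalized update and the validation-set gradient of the current global model, then smoothed across rounds. The function names and the exponential-moving-average form of the smoothing are assumptions for illustration.

```python
import numpy as np

def normalize_updates(client_updates):
    """Local weight normalization: rescale each client update to unit L2 norm
    so that large-magnitude (possibly noisy) updates do not dominate."""
    return [delta / (np.linalg.norm(delta) + 1e-12) for delta in client_updates]

def influence_scores(client_updates, val_grad):
    """Trajectory-based influence: inner product of each client's update with
    the validation-loss gradient at the current global model. A positive score
    means the update points in a loss-decreasing direction."""
    return np.array([np.dot(delta, val_grad) for delta in client_updates])

def smooth_scores(scores, prev_scores, beta=0.9):
    """Influence score smoothing across rounds (assumed here to be an
    exponential moving average with coefficient beta)."""
    return beta * prev_scores + (1.0 - beta) * scores
```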
Abstract
Federated learning (FL) faces persistent robustness challenges due to non-IID data distributions and adversarial client behavior. A promising mitigation strategy is contribution evaluation, which enables adaptive aggregation by quantifying each client's utility to the global model. However, state-of-the-art Shapley-value (SV)-based approaches incur high computational overhead due to repeated model reweighting and inference, which limits their scalability. We propose FedIF, a novel FL aggregation framework that leverages trajectory-based influence estimation to efficiently compute client contributions. FedIF adapts influence estimation to decentralized FL by introducing normalized and smoothed influence scores computed from lightweight gradient operations on client updates and a public validation set. Theoretical analysis demonstrates that FedIF yields a tighter bound on one-step global loss change under noisy conditions. Extensive experiments on CIFAR-10 and Fashion-MNIST show that FedIF achieves robustness comparable to or exceeding SV-based methods in the presence of label noise, gradient noise, and adversarial samples, while reducing aggregation overhead by up to 450x. Ablation studies confirm the effectiveness of FedIF's design choices, including local weight normalization and influence smoothing. Our results establish FedIF as a practical, theoretically grounded, and scalable alternative to Shapley-value-based approaches for efficient and robust FL in real-world deployments.
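As a companion to the abstract, the sketch below illustrates how smoothed influence scores might be turned into adaptive aggregation weights. The clipping-and-renormalization rule and the uniform fallback are illustrative assumptions, not necessarily the paper's exact weighting scheme.

```python
import numpy as np

def aggregate(global_weights, client_updates, smoothed_scores):
    """Influence-weighted aggregation: clip non-positive scores (clients whose
    updates appear noisy or adversarial), renormalize the remainder into convex
    aggregation weights, and apply the weighted average of client updates."""
    w = np.clip(smoothed_scores, 0.0, None)
    if w.sum() == 0.0:
        w = np.ones_like(w)  # fall back to uniform weights if all scores are clipped
    w = w / w.sum()
    agg_update = sum(wi * delta for wi, delta in zip(w, client_updates))
    return global_weights + agg_update
```

Down-weighting clients with low or negative influence is what yields robustness to label noise, gradient noise, and adversarial updates, while avoiding the repeated model reweighting and inference that SV-based evaluation requires.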