Cost-TrustFL: Cost-Aware Hierarchical Federated Learning with Lightweight Reputation Evaluation across Multi-Cloud

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-cloud federated learning faces three key challenges: non-IID data distributions, insufficient robustness against poisoning attacks, and high cross-cloud communication costs (e.g., egress fees). To address these, we propose a hierarchical federated learning framework that jointly optimizes model accuracy and economic efficiency. Our method introduces a gradient-based Shapley value approximation with linear complexity for lightweight reputation scoring of potentially malicious participants; designs the first cost-aware hierarchical aggregation strategy, which prioritizes intra-cloud aggregation to drastically reduce egress traffic; and integrates Byzantine-robust aggregation to enhance security. Evaluated on CIFAR-10 and FEMNIST with 30% malicious clients, our approach achieves 86.7% test accuracy while reducing communication costs by 32%. Moreover, it demonstrates strong robustness across varying degrees of non-IIDness and poisoning attack intensity.
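The summary does not spell out how the linear-complexity Shapley approximation works. A minimal sketch of one common gradient-based proxy (scoring each client by its update's cosine alignment with the aggregate update, in O(n) rather than the exponential exact Shapley computation) might look like this; the function name and clipping rule are illustrative assumptions, and the paper's exact formulation may differ:

```python
import numpy as np

def approx_shapley_scores(client_grads, eps=1e-12):
    """Linear-time proxy for per-client Shapley contributions.

    Each client's score is the cosine similarity between its gradient
    and the mean gradient across clients: an approximation of marginal
    contribution that avoids enumerating all 2^n coalitions.
    Hypothetical sketch; not the paper's exact mechanism.
    """
    grads = np.stack(client_grads)               # (n_clients, dim)
    mean_grad = grads.mean(axis=0)
    sims = grads @ mean_grad / (
        np.linalg.norm(grads, axis=1) * np.linalg.norm(mean_grad) + eps
    )
    # Negative alignment suggests a poisoned update: clip its score to 0.
    scores = np.clip(sims, 0.0, None)
    total = scores.sum()
    if total == 0:
        return np.full(len(scores), 1.0 / len(scores))
    return scores / total                        # normalized reputation weights

# Honest clients push in a shared direction; a poisoner pushes opposite.
honest = [np.array([1.0, 0.9]), np.array([0.9, 1.1])]
poisoned = [np.array([-1.0, -1.0])]
scores = approx_shapley_scores(honest + poisoned)
```

The normalized scores can then reweight (or exclude) clients during aggregation, which is how a reputation signal typically feeds a Byzantine-robust rule.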

📝 Abstract
Federated learning across multi-cloud environments faces critical challenges, including non-IID data distributions, malicious participant detection, and substantial cross-cloud communication costs (egress fees). Existing Byzantine-robust methods focus primarily on model accuracy while overlooking the economic implications of data transfer across cloud providers. This paper presents Cost-TrustFL, a hierarchical federated learning framework that jointly optimizes model performance and communication costs while providing robust defense against poisoning attacks. We propose a gradient-based approximate Shapley value computation method that reduces the complexity from exponential to linear, enabling lightweight reputation evaluation. Our cost-aware aggregation strategy prioritizes intra-cloud communication to minimize expensive cross-cloud data transfers. Experiments on CIFAR-10 and FEMNIST datasets demonstrate that Cost-TrustFL achieves 86.7% accuracy under 30% malicious clients while reducing communication costs by 32% compared to baseline methods. The framework maintains stable performance across varying non-IID degrees and attack intensities, making it practical for real-world multi-cloud deployments.
Problem

Research questions and friction points this paper is trying to address.

Jointly optimizing model performance and communication costs in multi-cloud federated learning.
Defending robustly against poisoning attacks from malicious participants.
Reducing cross-cloud data transfer expenses (egress fees) through cost-aware aggregation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical federated learning that jointly optimizes performance and communication costs
Gradient-based Shapley approximation reduces complexity from exponential to linear for lightweight reputation evaluation
Cost-aware aggregation prioritizes intra-cloud communication to minimize cross-cloud transfers
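The cost-aware hierarchy in the last bullet can be sketched as two-tier averaging: clients aggregate inside their own cloud first, and only one model-sized message per cloud crosses the metered boundary. The function below is a hypothetical illustration (names and client-count weighting are assumptions; the actual framework also applies Byzantine-robust filtering, omitted here):

```python
import numpy as np

def hierarchical_aggregate(cloud_updates):
    """Two-tier, cost-aware aggregation sketch.

    cloud_updates: dict mapping cloud name -> list of client update arrays.
    Intra-cloud averaging is cheap (no egress fees); only the per-cloud
    means cross the cloud boundary, so cross-cloud traffic scales with
    the number of clouds instead of the number of clients.
    Returns the global update and the cross-cloud message count.
    """
    cloud_means, weights = [], []
    for updates in cloud_updates.values():
        cloud_means.append(np.mean(updates, axis=0))  # intra-cloud tier
        weights.append(len(updates))                  # weight by client count
    global_update = np.average(cloud_means, axis=0, weights=weights)
    cross_cloud_messages = len(cloud_updates)         # one per cloud
    return global_update, cross_cloud_messages
```

With flat (non-hierarchical) aggregation, every client update would cross a cloud boundary; here the egress cost drops from one message per client to one per cloud, which is the source of the reported savings.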
Jixiao Yang
Westcliff University, Irvine, CA, USA
Jinyu Chen
The Hong Kong Polytechnic University
Edge/cloud computing; video transmission.
Zixiao Huang
University of Washington, Seattle, WA, USA
Chengda Xu
University of Washington, Seattle, WA, USA
Chi Zhang
Northeastern University, Boston, MA, USA
Sijia Li
Institute of Information Engineering, Chinese Academy of Sciences