Edge Association Strategies for Synthetic Data Empowered Hierarchical Federated Learning with Non-IID Data

📅 2025-06-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address slow convergence and low client participation in hierarchical federated learning (HFL) under non-IID data distributions, this paper proposes a synergistic optimization framework integrating synthetic data augmentation and edge-server-based incentives. Specifically, lightweight synthetic data are generated at the edge server to mitigate statistical heterogeneity, and a joint training mechanism is designed that fuses local and synthetic data. Additionally, a resource-cost-aware edge-client association strategy is established, coupled with an incentive mechanism to enhance client collaboration willingness. Experimental results demonstrate that the proposed approach significantly reduces communication rounds—by an average of 32.7%—accelerates model convergence, improves classification accuracy by up to 5.8% under non-IID settings, and substantially increases long-term client participation rates.

Technology Category

Application Category

📝 Abstract
In recent years, Federated Learning (FL) has emerged as a widely adopted privacy-preserving distributed training approach, attracting significant interest from both academia and industry. Research efforts have been dedicated to improving different aspects of FL, such as algorithm improvement, resource allocation, and client selection, to enable its deployment in distributed edge networks for practical applications. One of the reasons for the poor FL model performance is due to the worker dropout during training as the FL server may be located far away from the FL workers. To address this issue, an Hierarchical Federated Learning (HFL) framework has been introduced, incorporating an additional layer of edge servers to relay communication between the FL server and workers. While the HFL framework improves the communication between the FL server and workers, large number of communication rounds may still be required for model convergence, particularly when FL workers have non-independent and identically distributed (non-IID) data. Moreover, the FL workers are assumed to fully cooperate in the FL training process, which may not always be true in practical situations. To overcome these challenges, we propose a synthetic-data-empowered HFL framework that mitigates the statistical issues arising from non-IID local datasets while also incentivizing FL worker participation. In our proposed framework, the edge servers reward the FL workers in their clusters for facilitating the FL training process. To improve the performance of the FL model given the non-IID local datasets of the FL workers, the edge servers generate and distribute synthetic datasets to FL workers within their clusters. FL workers determine which edge server to associate with, considering the computational resources required to train on both their local datasets and the synthetic datasets.
Problem

Research questions and friction points this paper is trying to address.

Improving FL model performance with non-IID data
Reducing communication rounds in Hierarchical FL
Incentivizing worker participation in FL training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Federated Learning with edge servers
Synthetic data for non-IID data mitigation
Edge association with computational resource consideration
🔎 Similar Papers
No similar papers found.