🤖 AI Summary
Existing quantum federated learning (QFL) frameworks predominantly assume client homogeneity, overlooking real-world heterogeneity in quantum data distributions, encoding schemes, hardware noise profiles, and computational capacity; this leads to unstable convergence, slow training, and degraded model performance. Method: This work systematically classifies heterogeneity in QFL into two categories, data heterogeneity and system heterogeneity, and analyzes how each affects training convergence and model aggregation. Contribution/Results: The paper critically evaluates existing mitigation strategies and their limitations, presents a case study demonstrating the viability of addressing quantum heterogeneity, and outlines future research directions toward robust and scalable heterogeneous QFL frameworks.
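To make the aggregation problem concrete, here is a minimal, hypothetical sketch of how a QFL server could down-weight updates from noisier quantum clients during federated averaging. This is an illustration of the general idea, not the paper's actual method; the function name, the noise-scaling rule `n * (1 - eps)`, and the input format are all assumptions.

```python
# Hypothetical noise-aware federated averaging for QFL.
# Each client reports its model parameters, its local sample count,
# and an estimated hardware noise level (e.g. an average gate error
# rate in [0, 1)). Noisier clients receive proportionally less
# weight than they would under plain sample-count FedAvg.

def noise_adaptive_aggregate(client_params, sample_counts, noise_levels):
    """Return the weighted average of client parameter vectors.

    client_params: list of equal-length parameter lists, one per client
    sample_counts: list of ints, local dataset size per client
    noise_levels:  list of floats in [0, 1), estimated noise per client
    """
    # Raw weight = data size, scaled down by hardware noise (assumed rule).
    raw = [n * (1.0 - eps) for n, eps in zip(sample_counts, noise_levels)]
    total = sum(raw)
    weights = [w / total for w in raw]

    # Component-wise weighted average of the parameter vectors.
    dim = len(client_params[0])
    return [
        sum(w * p[i] for w, p in zip(weights, client_params))
        for i in range(dim)
    ]
```

For example, two clients with equal dataset sizes but noise levels 0.0 and 0.5 receive weights 2/3 and 1/3 rather than the equal 1/2 weights plain FedAvg would assign.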
📝 Abstract
Quantum federated learning (QFL) combines quantum computing and federated learning to enable decentralized model training while preserving data privacy. QFL can improve computational efficiency and scalability by exploiting quantum properties such as superposition and entanglement. However, existing QFL frameworks largely assume homogeneity among quantum clients and do not account for real-world variation in quantum data distributions, encoding techniques, hardware noise levels, and computational capacity. These differences can destabilize training, slow convergence, and reduce overall model performance. In this paper, we conduct an in-depth examination of heterogeneity in QFL, classifying it into two categories: data heterogeneity and system heterogeneity. We then investigate the influence of heterogeneity on training convergence and model aggregation. We critically evaluate existing mitigation strategies, highlight their limitations, and present a case study demonstrating the viability of tackling quantum heterogeneity. Finally, we discuss potential future research directions for constructing robust and scalable heterogeneous QFL frameworks.
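The two-way classification in the abstract can be made concrete with a small sketch of a per-client profile. The class and field names below are illustrative assumptions, not taken from the paper: data heterogeneity covers what each client holds and how it is encoded, while system heterogeneity covers the client's hardware noise and capacity.

```python
# Hypothetical client profile separating the two heterogeneity
# categories the paper classifies. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataHeterogeneity:
    # Non-IID class proportions on this client's local quantum dataset.
    label_distribution: dict = field(default_factory=dict)
    # Quantum data encoding scheme, e.g. "amplitude", "angle", "basis".
    encoding_scheme: str = "angle"

@dataclass
class SystemHeterogeneity:
    gate_error_rate: float = 0.0   # hardware noise profile
    num_qubits: int = 0            # computational capacity
    shots_per_second: float = 0.0  # execution throughput

@dataclass
class QFLClientProfile:
    data: DataHeterogeneity
    system: SystemHeterogeneity
```

A server could collect such profiles before training to decide, for instance, how to group clients or weight their updates.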