AI Summary
Quantum federated learning (QFL) faces challenges stemming from client-side quantum hardware heterogeneity and varying noise profiles, which hinder uniform circuit-depth assignment and compromise training efficacy. To address this, we propose Quorus, a framework enabling collaborative training across heterogeneous quantum devices. Quorus employs inter-layer loss functions to adaptively select the optimal model depth per client; integrates variational quantum circuits with mid-circuit measurement feedback and a customized federated aggregation strategy; and jointly optimizes privacy preservation, measurement efficiency, qubit resource utilization, and parameter scalability. Evaluated in both simulation and on real quantum hardware (IBM Quantum), Quorus achieves an average 12.4% improvement in test accuracy over state-of-the-art baselines and significantly increases gradient magnitudes for high-depth clients.
Abstract
Quantum machine learning (QML) holds the promise of solving classically intractable problems, but because critical data can be fragmented across private clients, distributed QML is needed in a quantum federated learning (QFL) setting. However, the quantum computers that different clients have access to can be error-prone and have heterogeneous error properties, requiring them to run circuits of different depths. We propose a novel solution to this QFL problem, Quorus, which uses a layerwise loss function to effectively train varying-depth quantum models, allowing each client to choose a model depth that yields high-fidelity output given its individual capacity. Quorus also presents various model designs, tailored to client needs, that optimize for shot budget, qubit count, mid-circuit measurement, and optimization space. Our simulation and real-hardware results show the promise of Quorus: it increases the magnitude of gradients for higher-depth clients and improves test accuracy by 12.4% on average over the state of the art.
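To make the layerwise-loss idea concrete, here is a minimal sketch in plain Python. It assumes each truncated depth of the shared circuit yields its own output distribution, and that the training objective is a weighted sum of per-depth cross-entropies; the function name, the uniform weighting, and this exact formulation are illustrative assumptions, not the paper's actual method.

```python
import math

def layerwise_loss(per_depth_probs, target, weights=None):
    """Hypothetical layerwise loss sketch.

    per_depth_probs: list of probability distributions, one per circuit depth
                     (e.g. the measurement statistics after 1, 2, ... layers).
    target:          index of the correct class.
    weights:         optional per-depth weights; uniform if omitted.

    Returns the weighted sum of cross-entropy losses across depths, so that
    every depth prefix is trained to be a usable model on its own.
    """
    if weights is None:
        weights = [1.0 / len(per_depth_probs)] * len(per_depth_probs)
    total = 0.0
    for w, probs in zip(weights, per_depth_probs):
        # Cross-entropy of this depth's output; epsilon guards log(0).
        total += w * -math.log(probs[target] + 1e-12)
    return total

# Example: two depths, where the deeper output is more confident
# in the correct class (index 0), so it contributes a smaller loss term.
shallow = [0.6, 0.4]
deep = [0.9, 0.1]
loss = layerwise_loss([shallow, deep], target=0)
```

Under this sketch, a client that can only run the shallow prefix still receives a trained model, while deeper clients benefit from the extra layers, which is the intuition behind letting clients pick a depth matching their hardware's error properties.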