🤖 AI Summary
This paper addresses the lack of robustness in quantum kernel learning (QKL) under concurrent threats from quantum hardware noise and malicious data injection. We propose the first decentralized QKL framework, which integrates distributed robust optimization with quantum kernel estimation, incorporating explicit noise modeling and adversarial training to enable node-level collaborative learning without a central coordinator. The framework maintains high classification accuracy—improving over baselines by 12.3%—under noisy quantum gate operations, and effectively mitigates coordinated label- and feature-tampering attacks across multiple nodes. Our primary contributions are: (i) establishing the first decentralized QKL paradigm; (ii) simultaneously ensuring robustness against both physical-layer quantum noise and data-layer adversarial perturbations; and (iii) providing a scalable, secure pathway toward practical quantum machine learning.
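The summary does not specify how node-level collaboration resists tampered nodes; as a hypothetical illustration only (the function name, trim parameter, and aggregation rule are assumptions, not the paper's method), a decentralized update could combine neighbors' parameter vectors with a coordinate-wise trimmed mean, which bounds the influence of a few malicious neighbors:

```python
import numpy as np

def trimmed_mean(updates, trim=1):
    # updates: list of parameter vectors received from neighboring nodes.
    # Coordinate-wise trimmed mean: sort each coordinate across nodes, drop
    # the `trim` smallest and `trim` largest values, then average the rest.
    # This caps the influence of up to `trim` tampered (Byzantine) neighbors.
    arr = np.sort(np.stack(updates), axis=0)
    return arr[trim:len(updates) - trim].mean(axis=0)

# Example: four honest neighbors agree; one adversarial node sends an outlier.
honest = [np.array([1.0, 2.0])] * 4
adversarial = [np.array([100.0, -100.0])]
robust_update = trimmed_mean(honest + adversarial, trim=1)  # -> [1.0, 2.0]
```

A plain mean of the same five updates would be dragged to roughly [20.8, -18.4], which shows why some form of robust aggregation is needed under coordinated attacks.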
📝 Abstract
This paper proposes a general decentralized framework for quantum kernel learning (QKL) that is robust against quantum noise and can further be designed to defend against adversarial information attacks, yielding a robust approach named RDQKL. We analyze the impact of noise on QKL and study the robustness of decentralized QKL to such noise. By integrating robust decentralized optimization techniques, our method mitigates the impact of malicious data injections across multiple nodes. Experimental results demonstrate that our approach maintains high accuracy under noisy quantum operations and effectively counters adversarial modifications, offering a promising pathway toward practical, scalable, and secure quantum machine learning (QML).
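The abstract does not spell out its noise model; a minimal sketch, assuming a simple angle-encoding feature map and a global depolarizing channel (both common simplifications in the QKL literature, not necessarily the paper's exact setup), shows how hardware noise shrinks a fidelity kernel toward the uninformative value 1/d:

```python
import numpy as np

def feature_map(x, n_qubits=2):
    # Hypothetical angle-encoding map: each qubit is a single-qubit rotation
    # by one input coordinate; the full state is their tensor product.
    qubits = [np.array([np.cos(x[q % len(x)] / 2), np.sin(x[q % len(x)] / 2)])
              for q in range(n_qubits)]
    psi = qubits[0]
    for s in qubits[1:]:
        psi = np.kron(psi, s)
    return psi

def noisy_quantum_kernel(x1, x2, p=0.05, n_qubits=2):
    # Ideal fidelity kernel |<phi(x1)|phi(x2)>|^2, then a global depolarizing
    # channel with strength p mixes it toward 1/d, where d = 2**n_qubits.
    psi1, psi2 = feature_map(x1, n_qubits), feature_map(x2, n_qubits)
    k_ideal = abs(np.vdot(psi1, psi2)) ** 2
    return (1 - p) * k_ideal + p / 2 ** n_qubits

x = np.array([0.3, 1.1])
noisy_quantum_kernel(x, x, p=0.0)   # -> 1.0 (noiseless self-similarity)
noisy_quantum_kernel(x, x, p=0.05)  # -> 0.9625 (shrunk toward 1/4)
```

Under this toy model, noise acts as a deterministic contraction of the kernel matrix, which is the kind of effect a noise-aware training procedure can explicitly account for.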