Q-SafeML: Safety Assessment of Quantum Machine Learning via Quantum Distance Metrics

📅 2025-09-04
🤖 AI Summary
Quantum machine learning (QML) lacks dedicated safety-monitoring mechanisms for safety-critical systems. Method: this paper proposes Q-SafeML, the first framework to extend safety monitoring from classical to quantum machine learning, leveraging model-dependent distance metrics defined over quantum state spaces (e.g., fidelity, trace distance) to detect concept drift between training and deployment data in real time at the post-classification stage. Unlike conventional data-driven, classifier-agnostic approaches, Q-SafeML is designed explicitly for quantum models and supports mainstream architectures such as quantum convolutional neural networks (QCNNs) and variational quantum circuits (VQCs). Contribution/Results: experiments show that Q-SafeML reliably identifies anomalous concept drift, improves system transparency and interpretability, provides evidence for human-in-the-loop decision-making, and strengthens the runtime safety of QML systems.
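The two quantum distance metrics named above have standard closed forms for density matrices: the Uhlmann fidelity F(ρ,σ) = (Tr√(√ρ σ √ρ))² and the trace distance T(ρ,σ) = ½ Tr|ρ − σ|. A minimal NumPy/SciPy sketch of both (not the paper's implementation; the example states are chosen only for illustration):

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    sqrt_rho = sqrtm(rho)
    inner = sqrtm(sqrt_rho @ sigma @ sqrt_rho)
    return float(np.real(np.trace(inner)) ** 2)

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * Tr|rho - sigma|, computed from the
    eigenvalues of the Hermitian difference rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigs)))

# Pure-state density matrices for |0> and |+> as toy inputs
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0.conj())
rhop = np.outer(ketp, ketp.conj())

f_same = fidelity(rho0, rho0)          # ≈ 1.0 for identical states
f_diff = fidelity(rho0, rhop)          # ≈ 0.5 = |<0|+>|^2
t_diff = trace_distance(rho0, rhop)    # ≈ 0.707 = 1/sqrt(2)
```

For pure states the fidelity reduces to the squared overlap |⟨ψ|φ⟩|², which is why the |0⟩-vs-|+⟩ pair gives 0.5.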

📝 Abstract
The rise of machine learning in safety-critical systems has paralleled advancements in quantum computing, leading to the emerging field of Quantum Machine Learning (QML). While safety monitoring has progressed in classical ML, existing methods are not directly applicable to QML due to fundamental differences in quantum computation. Given the novelty of QML, dedicated safety mechanisms remain underdeveloped. This paper introduces Q-SafeML, a safety monitoring approach for QML. The method builds on SafeML, a recent approach that uses statistical distance measures to assess model accuracy and provide confidence in the reasoning of an algorithm. Q-SafeML adapts this idea by incorporating quantum-centric distance measures, aligning with the probabilistic nature of QML outputs. This shift to a model-dependent, post-classification evaluation represents a key departure from classical SafeML, which is dataset-driven and classifier-agnostic. The distinction is motivated by the unique representational constraints of quantum systems, which require distance metrics defined over quantum state spaces. Q-SafeML measures distances between operational and training data, addressing concept drift in the context of QML. Experiments on QCNN and VQC models show that this enables informed human oversight, enhancing system transparency and safety.
Problem

Research questions and friction points this paper is trying to address.

Assess safety of quantum machine learning models
Detect concept drift in quantum computing environments
Develop quantum-specific distance metrics for monitoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses quantum distance metrics for safety
Adapts SafeML with quantum-centric measures
Detects concept drifts in quantum data
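The drift-detection idea in the bullets above can be sketched as a simple monitor: compare the average quantum state of the training data against the average state of an operational batch, and raise an alarm when their trace distance exceeds a calibrated threshold. This is an illustrative sketch only, not the paper's exact procedure; the function names (`mean_state`, `drift_alarm`) and the threshold value are assumptions for the example:

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * Tr|rho - sigma| for Hermitian density matrices."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigs)))

def mean_state(statevectors):
    """Average density matrix over a batch of normalized statevectors."""
    return np.mean([np.outer(v, v.conj()) for v in statevectors], axis=0)

def drift_alarm(train_states, op_states, threshold=0.2):
    """Flag drift when the distance between the mean training state and the
    mean operational state exceeds the (hypothetical) calibrated threshold."""
    d = trace_distance(mean_state(train_states), mean_state(op_states))
    return d, d > threshold

# Toy data: training batch of |0> states vs. a drifted batch of |+> states
train = [np.array([1.0, 0.0])] * 50
drifted = [np.array([1.0, 1.0]) / np.sqrt(2)] * 50

d_same, alarm_same = drift_alarm(train, train)      # no drift expected
d_drift, alarm_drift = drift_alarm(train, drifted)  # drift expected
```

In a deployed system the threshold would be calibrated on held-out training batches, and the flagged distance would be surfaced to a human operator, matching the human-oversight role described in the abstract.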