Detection of Physiological Data Tampering Attacks with Quantum Machine Learning

📅 2025-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Physiological time-series data from cloud-based medical devices and wearable sensors face growing security threats from white-box adversarial manipulations—specifically, label-flipping data poisoning and adversarial perturbations. Method: This paper presents the first systematic evaluation of quantum machine learning (QML) for detecting such tampering, proposing a unified detection framework integrating quantum state encoding, quantum support vector machines (QSVM), and quantum neural networks (QNN), benchmarked against classical models including LSTM and random forests. Contribution/Results: Experimental results demonstrate that QML achieves 75–95% detection accuracy against label-flipping poisoning attacks—significantly outperforming classical baselines—and 45–65% accuracy against adversarial perturbations, also surpassing conventional approaches. These findings reveal QML’s intrinsic capacity for modeling high-dimensional, nonlinear physiological signals and extracting robust features, establishing a novel quantum-enhanced paradigm for securing healthcare data.

Technology Category

Application Category

📝 Abstract
The widespread use of cloud-based medical devices and wearable sensors has made physiological data susceptible to tampering. These attacks can compromise the reliability of healthcare systems which can be critical and life-threatening. Detection of such data tampering is of immediate need. Machine learning has been used to detect anomalies in datasets but the performance of Quantum Machine Learning (QML) is still yet to be evaluated for physiological sensor data. Thus, our study compares the effectiveness of QML for detecting physiological data tampering, focusing on two types of white-box attacks: data poisoning and adversarial perturbation. The results show that QML models are better at identifying label-flipping attacks, achieving accuracy rates of 75%-95% depending on the data and attack severity. This superior performance is due to the ability of quantum algorithms to handle complex and high-dimensional data. However, both QML and classical models struggle to detect more sophisticated adversarial perturbation attacks, which subtly alter data without changing its statistical properties. Although QML performed poorly against this attack with around 45%-65% accuracy, it still outperformed classical algorithms in some cases.
Problem

Research questions and friction points this paper is trying to address.

Detecting physiological data tampering
Evaluating Quantum Machine Learning effectiveness
Comparing QML and classical models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum Machine Learning
Detects Physiological Data Tampering
Outperforms Classical Algorithms
🔎 Similar Papers
No similar papers found.