Adaptive and Robust Data Poisoning Detection and Sanitization in Wearable IoT Systems using Large Language Models

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human activity recognition (HAR) in wearable Internet-of-Things (IoT) systems is highly vulnerable to data poisoning attacks, while conventional defenses rely heavily on large-scale labeled datasets and exhibit poor adaptability. Method: This paper proposes the first large language model (LLM)-driven zero-shot/few-shot detection and purification framework tailored for wearable-device data security. It innovatively integrates role-playing prompting with chain-of-thought reasoning to enable sensor-level anomaly detection and adaptive data restoration—without requiring task-specific training data. Contribution/Results: Extensive experiments demonstrate that the framework significantly outperforms baselines across key metrics—including detection accuracy, purification quality, end-to-end latency, and communication overhead—thereby enhancing the security and robustness of HAR systems in dynamic IoT environments.

📝 Abstract
The widespread integration of wearable sensing devices into Internet of Things (IoT) ecosystems, particularly in healthcare, smart homes, and industrial applications, has made robust human activity recognition (HAR) techniques essential for improving functionality and user experience. Although machine learning models have advanced HAR, they are increasingly susceptible to data poisoning attacks that compromise the data integrity and reliability of these systems. Conventional defenses against such attacks often require extensive task-specific training with large, labeled datasets, which limits adaptability in dynamic IoT environments. This work proposes a novel framework that uses large language models (LLMs) to perform poisoning detection and sanitization in HAR systems under zero-shot, one-shot, and few-shot learning paradigms. Our approach incorporates "role play" prompting, whereby the LLM assumes the role of an expert to contextualize and evaluate sensor anomalies, and "think step-by-step" reasoning, which guides the LLM to infer poisoning indicators in the raw sensor data and propose plausible clean alternatives. These strategies minimize reliance on curation of extensive datasets and enable robust, adaptable defense mechanisms in real time. We perform an extensive evaluation of the framework, quantifying detection accuracy, sanitization quality, latency, and communication cost, thus demonstrating the practicality and effectiveness of LLMs in improving the security and reliability of wearable IoT systems.
Problem

Research questions and friction points this paper is trying to address.

Detecting data poisoning attacks in wearable IoT human activity recognition systems
Reducing reliance on large labeled datasets for IoT security defenses
Improving real-time security and reliability of dynamic wearable IoT environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs detect data poisoning via role play
Step-by-step reasoning infers sensor anomalies
Zero-shot learning enables real-time sanitization
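The role-play and step-by-step prompting strategies summarized above can be sketched as a prompt-construction helper. This is a minimal illustration only: the function name, prompt wording, sensor format, and JSON output schema are assumptions for the sketch, not the paper's actual prompts.

```python
import json

def build_poisoning_check_prompt(window, activity_label):
    """Build a role-play + chain-of-thought prompt asking an LLM to flag
    poisoned samples in one window of accelerometer readings and suggest
    clean replacements (hypothetical sketch of the described strategy)."""
    # Role-play prompting: the system message casts the LLM as a domain expert.
    system = (
        "You are an expert in wearable-sensor security, auditing "
        "accelerometer streams from a human activity recognition system."
    )
    # Step-by-step reasoning: the user message enumerates the reasoning steps
    # before requesting a structured verdict.
    user = (
        f"Activity label: {activity_label}\n"
        f"Sensor window (m/s^2): {json.dumps(window)}\n"
        "Think step by step: (1) describe the expected signal range for this "
        "activity, (2) flag samples inconsistent with that range, "
        "(3) propose plausible clean replacement values for flagged samples. "
        'Finally answer with a JSON object of the form '
        '{"poisoned_indices": [...], "clean_values": [...]}.'
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: a "sitting" window with one implausible spike at index 2.
messages = build_poisoning_check_prompt([0.1, 0.2, 19.6, 0.1], "sitting")
```

In a zero-shot setting the messages would be sent as-is to a chat-style LLM endpoint; for one-shot or few-shot operation, labeled example windows could be appended as additional turns before the query.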
W. Mithsara
Southern Illinois University, USA
Ning Yang
Southern Illinois University, USA
Ahmed Imteaj
Assistant Professor, Florida Atlantic University
Robust and Secure AI · Multimodal LLMs · Federated Learning · Cybersecurity
Hussein Zangoti
Florida International University, Jazan University
Abdur R. Shahid
Southern Illinois University, USA