Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning

📅 2025-05-09
🤖 AI Summary
This work first reveals the dual threat that sponge attacks pose to wearable-sensing AI models (e.g., CNNs/LSTMs trained on IMU/ECG data): significant increases in energy consumption and inference latency, endangering IoT device battery lifetime and real-time performance. To address this, we propose a lightweight structured-pruning defense and show, uniquely, that pruning not only reduces model parameters but also hardens models against energy- and latency-oriented adversarial attacks. We establish a comprehensive framework for sponge attack generation, modeling, and joint energy-latency evaluation tailored to sensing AI. Extensive experiments across multiple benchmarks show that pruned models achieve more than 40% higher attack resilience, reduce attack-induced energy overhead by 62%, and cut latency inflation by 55%, while preserving at least 98% of the original accuracy.

📝 Abstract
Recent studies have shown that sponge attacks can significantly increase the energy consumption and inference latency of deep neural networks (DNNs). However, prior work has focused primarily on computer vision and natural language processing tasks, overlooking the growing use of lightweight AI models in sensing-based applications on resource-constrained devices, such as those in Internet of Things (IoT) environments. These attacks pose serious threats of energy depletion and latency degradation in systems where limited battery capacity and real-time responsiveness are critical for reliable operation. This paper makes two key contributions. First, we present the first systematic exploration of energy-latency sponge attacks targeting sensing-based AI models. Using wearable sensing-based AI as a case study, we demonstrate that sponge attacks can substantially degrade performance by increasing energy consumption, leading to faster battery drain, and by prolonging inference latency. Second, to mitigate such attacks, we investigate model pruning, a widely adopted compression technique for resource-constrained AI, as a potential defense. Our experiments show that pruning-induced sparsity significantly improves model resilience against sponge poisoning. We also quantify the trade-offs between model efficiency and attack resilience, offering insights into the security implications of model compression in sensing-based AI systems deployed in IoT environments.
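The abstract's core mechanism, an input crafted to inflate energy and latency, can be illustrated with a toy sketch. Sponge attacks commonly work by driving up activation density, so that zero-skipping hardware optimizations stop saving energy. The two-layer ReLU network, its random weights, and the gradient-free hill climb below are hypothetical stand-ins for illustration only, not the paper's models or attack method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network (hypothetical weights; a stand-in for a
# wearable-sensing CNN/LSTM, which the paper targets).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 16))

def activation_density(x):
    """Fraction of nonzero ReLU activations: a proxy for energy cost on
    hardware that skips zero activations (sponge inputs maximize it)."""
    h1 = np.maximum(W1 @ x, 0.0)
    h2 = np.maximum(W2 @ h1, 0.0)
    acts = np.concatenate([h1, h2])
    return float(np.mean(acts > 0))

def sponge_search(x0, steps=200, eps=0.1):
    """Gradient-free hill climb: keep perturbations that raise density."""
    x, best = x0.copy(), activation_density(x0)
    for _ in range(steps):
        cand = x + eps * rng.normal(size=x.shape)
        d = activation_density(cand)
        if d > best:
            x, best = cand, d
    return x, best

x0 = rng.normal(size=8)
x_sponge, d_sponge = sponge_search(x0)
# By construction the search never decreases density relative to x0.
```

On real hardware the attack surface is the same idea at scale: denser activations mean more multiply-accumulates actually executed, hence more energy drawn and longer inference.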
Problem

Research questions and friction points this paper is trying to address.

Sponge attacks increase energy and latency in sensing AI models
Lightweight AI in IoT devices is vulnerable to energy depletion
Security implications of model compression are poorly understood
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model pruning defends against sponge attacks
Pruning-induced sparsity enhances attack resilience
Balances model efficiency and security trade-offs
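The defense the bullets describe, structured pruning, removes whole neurons or channels so the pruned model is genuinely smaller and faster, not just sparser. A minimal magnitude-based sketch is below; the weight matrix and the L2-norm criterion are illustrative assumptions, not the paper's exact pruning procedure.

```python
import numpy as np

def prune_neurons(W, keep_ratio=0.5):
    """Structured pruning sketch: drop the output neurons (rows of W)
    with the smallest L2 norms, keeping `keep_ratio` of them."""
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.sort(np.argsort(norms)[-k:])  # indices of strongest neurons
    return W[keep], keep

W = np.array([[3.0, 4.0],   # norm 5.0
              [0.1, 0.1],   # norm ~0.14
              [1.0, 0.0],   # norm 1.0
              [0.0, 2.0]])  # norm 2.0
Wp, kept = prune_neurons(W, keep_ratio=0.5)
# Keeps the two highest-norm rows (indices 0 and 3).
```

Because entire rows disappear, downstream layers shrink too, which is why the paper can frame pruning as both a compression step and a defense: fewer live neurons cap how much activation work a sponge input can force.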