Exploiting Efficiency Vulnerabilities in Dynamic Deep Learning Systems

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic deep learning systems (DDLSs) face novel efficiency-security risks because their execution paths depend on the input: adversarial inputs can trigger high-latency, high-energy paths, leading to resource exhaustion and denial of service and severely compromising safety in real-time deployments. This work systematically identifies, for the first time, efficiency vulnerabilities in mainstream DDLS architectures (e.g., SkipNet, BranchyNet), proposes an efficiency-attack methodology based on dynamic path analysis and adversarial sample generation, and establishes a fine-grained resource monitoring and evaluation framework. It also introduces an execution-mode-constrained defense mechanism that, without sacrificing accuracy or adaptivity, reduces attack-induced increases in latency and energy consumption by 82.3% and 76.5%, respectively. Extensive experiments confirm that exploitable efficiency vulnerabilities are prevalent across diverse dynamic models and demonstrate that the defense prototype is practically deployable.
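The attack surface described above can be illustrated with a minimal early-exit model in the spirit of BranchyNet. This is a toy sketch, not the paper's implementation: the weights, the two-stage structure, and the 0.9 confidence threshold are all illustrative placeholders. A confidently classified input exits at the cheap first stage, while an input engineered to keep confidence low forces the expensive deeper path, inflating per-query latency and energy.

```python
import numpy as np

# Toy two-stage early-exit classifier (illustrative; not from the paper).
W1 = np.eye(4)              # cheap stage: 4-dim input -> 4 class logits
W2 = 2.0 * np.eye(4)        # expensive deeper stage (placeholder weights)
THRESHOLD = 0.9             # early-exit confidence threshold (assumed)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer(x):
    """Return (class probabilities, number of stages executed)."""
    p1 = softmax(x @ W1)
    if p1.max() >= THRESHOLD:          # confident -> exit early, skip stage 2
        return p1, 1
    p2 = softmax(x @ W1 + x @ W2)      # full path: strictly more compute
    return p2, 2

confident = np.array([10.0, 0.0, 0.0, 0.0])  # strongly class-0 input
ambiguous = np.zeros(4)                      # uniform logits: low confidence

_, stages_benign = infer(confident)
_, stages_attack = infer(ambiguous)
print(stages_benign, stages_attack)          # -> 1 2
```

An efficiency attack generalizes this: rather than handing the model a naturally ambiguous input, the adversary perturbs a benign input specifically to suppress early-exit confidence, so every query pays for the deepest path.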

📝 Abstract
The growing deployment of deep learning models in real-world environments has intensified the need for efficient inference under strict latency and resource constraints. To meet these demands, dynamic deep learning systems (DDLSs) have emerged, offering input-adaptive computation to optimize runtime efficiency. While these systems succeed in reducing cost, their dynamic nature introduces subtle and underexplored security risks. In particular, input-dependent execution pathways create opportunities for adversaries to degrade efficiency, resulting in excessive latency, energy usage, and potential denial-of-service in time-sensitive deployments. This work investigates the security implications of dynamic behaviors in DDLSs and reveals how current systems expose efficiency vulnerabilities exploitable by adversarial inputs. Through a survey of existing attack strategies, we identify gaps in the coverage of emerging model architectures and limitations in current defense mechanisms. Building on these insights, we propose to examine the feasibility of efficiency attacks on modern DDLSs and develop targeted defenses to preserve robustness under adversarial conditions.
Problem

Research questions and friction points this paper is trying to address.

Investigates security risks in dynamic deep learning systems
Identifies efficiency vulnerabilities from adversarial inputs
Proposes defenses for robustness in adversarial conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic deep learning systems adapt computation for efficiency
Adversarial inputs exploit efficiency vulnerabilities in DDLS
Proposed defenses target robustness against efficiency attacks
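One way to picture an "execution-mode-constrained" defense is as a budget on how often the expensive execution path may run. The sketch below is a hypothetical policy of this kind, assumed for illustration; the paper's actual mechanism is not detailed in this summary. It caps the fraction of deep-path executions per monitoring window, forcing an early exit once the budget is spent, so a flood of efficiency-adversarial inputs cannot drive sustained worst-case compute.

```python
class BudgetedExitPolicy:
    """Cap deep-path executions per window of queries (illustrative defense)."""

    def __init__(self, max_deep_fraction=0.3, window=100):
        self.max_deep = int(max_deep_fraction * window)  # deep-path budget
        self.window = window
        self.seen = 0          # queries in the current window
        self.deep_used = 0     # deep-path executions granted so far

    def allow_deep_path(self):
        """Return True if this query may take the expensive path."""
        if self.seen == self.window:     # start a fresh monitoring window
            self.seen = self.deep_used = 0
        self.seen += 1
        if self.deep_used < self.max_deep:
            self.deep_used += 1
            return True
        return False                     # budget spent: force early exit

policy = BudgetedExitPolicy(max_deep_fraction=0.3, window=10)
decisions = [policy.allow_deep_path() for _ in range(10)]
print(sum(decisions))  # -> 3 deep-path executions allowed in this window
```

The trade-off is the one the summary highlights: the cap must be loose enough to preserve accuracy and adaptivity on benign traffic while still bounding attack-induced latency and energy.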