SilentStriker: Toward Stealthy Bit-Flip Attacks on Large Language Models

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Bit-Flip Attacks (BFAs) on large language models (LLMs) face a fundamental trade-off between stealth and efficacy at the hardware level. Method: This paper proposes the first stealthy BFA framework tailored for LLMs. It reconstructs the loss function via a critical output-token suppression mechanism and jointly optimizes parameter perturbations using an iterative, progressive search strategy, thereby degrading task performance significantly while preserving textual naturalness. Unlike conventional perplexity-driven approaches, it avoids anomalous outputs, substantially enhancing attack stealth. Contribution/Results: Extensive experiments across multiple state-of-the-art LLMs demonstrate that the proposed method achieves greater task-performance degradation than existing baselines while maintaining human readability of generated text. It establishes a novel paradigm for hardware-level security evaluation of LLMs, bridging the gap between practical impact and undetectability in adversarial settings.
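For intuition on why BFAs are so damaging, a single flipped bit in the exponent field of a float32 weight can change its magnitude by dozens of orders of magnitude. The sketch below is illustrative only (not from the paper), using Python's `struct` module to reinterpret a float's bit pattern:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 and return the corrupted value.

    IEEE 754 binary32 layout: bit 31 is the sign, bits 30-23 the
    exponent, bits 22-0 the mantissa.
    """
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

corrupted = flip_bit(0.5, 30)  # exponent MSB flip: 0.5 becomes ~1.7e38
```

A mantissa-bit flip, by contrast, barely moves the value, which is why attack frameworks search for the few bit positions that matter.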

📝 Abstract
The rapid adoption of large language models (LLMs) in critical domains has spurred extensive research into their security issues. While input manipulation attacks (e.g., prompt injection) have been well studied, Bit-Flip Attacks (BFAs) -- which exploit hardware vulnerabilities to corrupt model parameters and cause severe performance degradation -- have received far less attention. Existing BFA methods suffer from key limitations: they fail to balance performance degradation and output naturalness, making them prone to discovery. In this paper, we introduce SilentStriker, the first stealthy bit-flip attack against LLMs that effectively degrades task performance while maintaining output naturalness. Our core contribution lies in addressing the challenge of designing effective loss functions for LLMs with variable output length and the vast output space. Unlike prior approaches that rely on output perplexity for attack loss formulation, which inevitably degrade output naturalness, we reformulate the attack objective by leveraging key output tokens as targets for suppression, enabling effective joint optimization of attack effectiveness and stealthiness. Additionally, we employ an iterative, progressive search strategy to maximize attack efficacy. Experiments show that SilentStriker significantly outperforms existing baselines, achieving successful attacks without compromising the naturalness of generated text.
Problem

Research questions and friction points this paper is trying to address.

Developing stealthy bit-flip attacks that degrade LLM performance
Balancing attack effectiveness with output naturalness in BFAs
Designing effective loss functions for variable-length LLM outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages key output tokens for suppression
Employs iterative progressive search strategy
Reformulates attack objective for stealthiness
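The paper does not publish its exact loss, but the key-token suppression idea can be sketched as a hypothetical objective: minimize the total log-probability of task-critical output tokens, leaving the rest of the distribution free so fluent but task-irrelevant text stays likely. All names below are illustrative, written in plain Python for a single output position:

```python
import math

def suppression_loss(logits, key_token_ids):
    """Hypothetical suppression objective over key output tokens.

    logits: one score per vocabulary token at a single output position.
    key_token_ids: indices of task-critical tokens to suppress.
    Minimizing the summed log-probabilities of the key tokens drives
    their probability toward zero without directly raising perplexity
    on the remaining vocabulary.
    """
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    # log p(t) = logit_t - log_z for a softmax over the logits.
    return sum(logits[t] - log_z for t in key_token_ids)
```

In this framing, the attacker's progressive search would pick bit flips that lower this loss; when the key token's logit drops, the loss decreases, matching the suppression goal.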
Haotian Xu
Zhejiang University
Qingsong Peng
Zhejiang University
Jie Shi
Huawei
Huadi Zheng
Unknown affiliation
Voice Technology · Information Security
Yu Li
Zhejiang University
Cheng Zhuo
Zhejiang University
EDA algorithms · VLSI design