NeuroStrike: Neuron-Level Attacks on Aligned LLMs

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM safety alignment methods, such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), remain vulnerable to adversarial prompt attacks; however, existing attacks rely on heuristic trial and error, generalize poorly across models, and do not scale. This work uncovers a critical vulnerability introduced by alignment itself: safety behavior concentrates in a sparse set of specialized *safety neurons* in the model's internal representations. The authors propose NeuroStrike, an attack framework that identifies these pivotal safety neurons via feedforward activation analysis and combines white-box neuron pruning with black-box adversarial prompt generation. NeuroStrike establishes a neuron-level transferable attack paradigm and introduces the first LLM profiling attack for black-box exploitation. It generalizes across models, architectures, and training pipelines: a 76.9% average attack success rate on 20+ open-weight LLMs after pruning under 0.6% of neurons in targeted layers, a 63.7% average success rate across five proprietary models, and 100% success against unsafe image inputs in multimodal settings.

📝 Abstract
Safety alignment is critical for the ethical deployment of large language models (LLMs), guiding them to avoid generating harmful or unethical content. Current alignment techniques, such as supervised fine-tuning and reinforcement learning from human feedback, remain fragile and can be bypassed by carefully crafted adversarial prompts. Unfortunately, such attacks rely on trial and error, lack generalizability across models, and are constrained by scalability and reliability. This paper presents NeuroStrike, a novel and generalizable attack framework that exploits a fundamental vulnerability introduced by alignment techniques: the reliance on sparse, specialized safety neurons responsible for detecting and suppressing harmful inputs. We apply NeuroStrike to both white-box and black-box settings: In the white-box setting, NeuroStrike identifies safety neurons through feedforward activation analysis and prunes them during inference to disable safety mechanisms. In the black-box setting, we propose the first LLM profiling attack, which leverages safety neuron transferability by training adversarial prompt generators on open-weight surrogate models and then deploying them against black-box and proprietary targets. We evaluate NeuroStrike on over 20 open-weight LLMs from major LLM developers. By removing less than 0.6% of neurons in targeted layers, NeuroStrike achieves an average attack success rate (ASR) of 76.9% using only vanilla malicious prompts. Moreover, NeuroStrike generalizes to four multimodal LLMs with 100% ASR on unsafe image inputs. Safety neurons transfer effectively across architectures, raising ASR to 78.5% on 11 fine-tuned models and 77.7% on five distilled models. The black-box LLM profiling attack achieves an average ASR of 63.7% across five black-box models, including the Google Gemini family.
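The "feedforward activation analysis" in the abstract can be illustrated with a minimal NumPy sketch: score each hidden neuron by how much more it activates on harmful prompts than on benign ones, then keep the top fraction (the paper prunes under 0.6% of neurons in targeted layers). The mean-difference scoring rule, the function name, and the synthetic activations below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def find_safety_neurons(acts_harmful, acts_benign, top_frac=0.006):
    """Rank neurons by how much more they fire on harmful prompts.

    acts_harmful / acts_benign: (num_prompts, num_neurons) arrays of
    feedforward (MLP) activations collected at one layer. The scoring
    rule here (difference of mean activations) is a hypothetical stand-in
    for the paper's activation analysis.
    """
    mu_harmful = acts_harmful.mean(axis=0)   # mean activation per neuron
    mu_benign = acts_benign.mean(axis=0)
    score = mu_harmful - mu_benign           # high = fires mostly on harmful
    k = max(1, int(top_frac * score.size))   # keep the top fraction
    return np.argsort(score)[-k:]            # indices of top-k neurons

# Toy demo: 1000 neurons; neurons 0-5 are planted "safety neurons"
# that activate strongly only on harmful prompts.
rng = np.random.default_rng(0)
harmful = rng.normal(0.0, 0.1, size=(64, 1000))
benign = rng.normal(0.0, 0.1, size=(64, 1000))
harmful[:, :6] += 5.0

safety = find_safety_neurons(harmful, benign)
print(np.sort(safety))  # → [0 1 2 3 4 5]
```

With `top_frac=0.006` and 1000 neurons, exactly 6 neurons are selected, matching the "<0.6% of neurons" budget reported in the abstract.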
Problem

Research questions and friction points this paper is trying to address.

Adversarial prompt attacks bypass LLM safety alignment but rely on trial and error and generalize poorly
Alignment concentrates safety behavior in sparse safety neurons, creating an exploitable vulnerability
How to mount scalable neuron-level attacks in both white-box and black-box settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies safety neurons via feedforward activation analysis
Prunes safety neurons at inference time to disable safety mechanisms
Trains adversarial prompt generators on surrogate models and transfers them to black-box targets
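The pruning step in the innovations above can be sketched as zeroing selected hidden activations in a feedforward block at inference time, which is equivalent to removing those neurons while leaving the other >99% of the layer intact. The ReLU MLP, its sizes, and the choice of pruned units below are illustrative assumptions, not the paper's models.

```python
import numpy as np

def mlp_forward(x, W_in, W_out, pruned=()):
    """One feedforward (MLP) block with selected hidden neurons disabled.

    Zeroing a hidden unit's activation at inference time removes its
    contribution entirely, mirroring the white-box attack: knock out the
    identified safety neurons and leave the rest of the layer untouched.
    """
    h = np.maximum(x @ W_in, 0.0)  # hidden activations (ReLU)
    h[list(pruned)] = 0.0          # disable the chosen neurons
    return h @ W_out

rng = np.random.default_rng(1)
W_in = rng.normal(size=(8, 32))
W_out = rng.normal(size=(32, 8))
x = rng.normal(size=8)

# Prune the two most active hidden units (stand-ins for safety neurons).
h = np.maximum(x @ W_in, 0.0)
targets = np.argsort(h)[-2:]

full = mlp_forward(x, W_in, W_out)
ablated = mlp_forward(x, W_in, W_out, pruned=targets)
print(np.allclose(full, ablated))  # → False
```

Because only the pruned units' contributions change, the rest of the computation is untouched; in a real attack the same surgery would be applied (for example via forward hooks) to the MLP layers the activation analysis flagged.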
🔎 Similar Papers
No similar papers found.