🤖 AI Summary
This work proposes SafeNeuron, a novel framework that achieves safety alignment at the neuronal level, a first in the field. Addressing the vulnerability of current large language models to neuron-level attacks and their lack of fine-grained control over internal safety representations, SafeNeuron identifies critical safety-related neurons through cross-layer analysis and freezes their weights during preference optimization. This encourages the model to develop redundant and stable safety representations. The approach substantially enhances robustness against attacks such as pruning across diverse architectures and modalities, effectively preventing the malicious reuse of open-source models for generating harmful content while preserving general capabilities. These findings reveal that safe behavior is governed by shared, stable internal neuronal representations.
📄 Abstract
Large language models (LLMs) and multimodal LLMs are typically safety-aligned before release to prevent harmful content generation. However, recent studies show that safety behaviors are concentrated in a small subset of parameters, making alignment brittle and easily bypassed through neuron-level attacks. Moreover, most existing alignment methods operate at the behavioral level, offering limited control over the model's internal safety mechanisms. In this work, we propose SafeNeuron, a neuron-level safety alignment framework that improves robustness by redistributing safety representations across the network. SafeNeuron first identifies safety-related neurons, then freezes these neurons during preference optimization to prevent reliance on sparse safety pathways and force the model to construct redundant safety representations. Extensive experiments across models and modalities demonstrate that SafeNeuron significantly improves robustness against neuron pruning attacks, reduces the risk of open-source models being repurposed as red-team generators, and preserves general capabilities. Furthermore, our layer-wise analysis reveals that safety behaviors are governed by stable and shared internal representations. Overall, SafeNeuron provides an interpretable and robust perspective for model alignment.
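The core mechanism described above, identifying safety-related neurons and then freezing them during preference optimization, can be illustrated with a minimal sketch. Note that this is an assumption-laden toy illustration, not the paper's actual implementation: the function names (`identify_safety_neurons`, `masked_update`), the saliency-based selection, and the simple gradient masking are all hypothetical stand-ins for SafeNeuron's cross-layer analysis and optimization procedure.

```python
import numpy as np

def identify_safety_neurons(saliency, k):
    # Hypothetical proxy: select the k neurons with the largest saliency
    # scores. SafeNeuron's real identification uses cross-layer analysis.
    return np.argsort(saliency)[-k:]

def masked_update(weights, grads, frozen_idx, lr=0.1):
    # Zero the gradients of frozen (safety-critical) neurons so the
    # preference-optimization step cannot modify them, pushing the model
    # to build redundant safety representations in other neurons.
    mask = np.ones_like(grads)
    mask[frozen_idx] = 0.0
    return weights - lr * grads * mask

# Toy example: 4 neurons, freeze the 2 with highest (hypothetical) saliency.
saliency = np.array([0.1, 0.9, 0.3, 0.7])
frozen = identify_safety_neurons(saliency, k=2)   # indices 1 and 3
weights = np.zeros(4)
grads = np.ones(4)
new_weights = masked_update(weights, grads, frozen)
# Frozen neurons keep their original weights; the rest are updated.
```

The key design point the sketch captures is that freezing acts on the optimizer, not the forward pass: the safety-critical neurons still participate in inference, but the training signal is forced to route new safety behavior through the remaining parameters.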