Light Alignment Improves LLM Safety via Model Self-Reflection with a Single Neuron

πŸ“… 2026-02-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current safety alignment methods for large language models either incur high computational cost and generalize poorly, or lean on predefined rules and the model's intrinsic capabilities; they struggle to balance efficiency with universality. This work proposes a safety-aware decoding mechanism based on single-neuron gating: a cheaply trained expert model supplies external safety guidance, and the gate dynamically integrates that guidance with the model's internal reasoning during generation, enabling efficient, self-reflective safety control. Remarkably, the approach needs only a single neuron to make safety decisions, yet consistently outperforms existing lightweight alignment strategies across multiple model scales. It significantly improves safety while preserving output utility, substantially reduces training overhead, and generalizes well across models.
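The summary sketches the mechanism only at a high level. Below is one plausible reading of the decoding step in code: a single learned neuron reads the base model's last hidden state, emits a scalar gate, and interpolates base-model and safety-expert logits. All names, shapes, and the exact gating form are assumptions for illustration, not the paper's verified architecture (see the NGSD repository linked in the abstract).

```python
import torch

torch.manual_seed(0)
vocab_size, hidden_size = 32000, 4096

# Hypothetical single-neuron gate: one weight vector plus a bias over the
# base model's last hidden state (illustrative, not the paper's exact form).
gate_weight = torch.randn(hidden_size) / hidden_size**0.5
gate_bias = torch.zeros(())

def gated_decode_step(base_logits, expert_logits, hidden_state):
    """Blend base and safety-expert next-token logits via a scalar gate."""
    g = torch.sigmoid(hidden_state @ gate_weight + gate_bias)  # scalar in (0, 1)
    mixed = (1.0 - g) * base_logits + g * expert_logits        # g -> 1: defer to expert
    return torch.distributions.Categorical(logits=mixed).sample()

# Stand-ins for one decoding step's outputs from the two frozen models.
base_logits = torch.randn(vocab_size)
expert_logits = torch.randn(vocab_size)
hidden_state = torch.randn(hidden_size)
next_token = gated_decode_step(base_logits, expert_logits, hidden_state)
```

Under this reading, the per-token overhead beyond the expert's forward pass is a single dot product, and when the gate saturates near zero the output distribution reduces exactly to the base model's, consistent with the claimed utility preservation.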

πŸ“ Abstract
The safety of large language models (LLMs) has increasingly emerged as a fundamental aspect of their development. Existing safety alignment for LLMs is predominantly achieved through post-training methods, which are computationally expensive and often fail to generalize well across different models. The few lightweight alignment approaches either rely heavily on precomputed safety injections or depend excessively on the model's own capabilities, resulting in limited generalization and degraded efficiency and usability during generation. In this work, we propose a safety-aware decoding method that requires only low-cost training of an expert model and employs a single neuron as a gating mechanism. By effectively balancing the model's intrinsic capabilities with external guidance, our approach simultaneously preserves utility and enhances output safety. It demonstrates clear advantages in training overhead and generalization across model scales, offering a new perspective on lightweight alignment for the safe and practical deployment of large language models. Code: https://github.com/Beijing-AISI/NGSD.
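On the training side, one way to read "low-cost" is that both LLMs stay frozen and only the tiny gating component is fitted. The sketch below guesses at such a recipe: the single neuron is trained with binary cross-entropy on hidden states labeled unsafe versus benign. The labels, data, and loss here are assumptions rather than the paper's actual procedure, and the training of the expert model itself is omitted.

```python
import torch

torch.manual_seed(0)
hidden_size = 4096

# Hypothetical setup: a single neuron (Linear(hidden_size, 1)) is the only
# trainable component; the base and expert LLMs remain frozen throughout.
gate = torch.nn.Linear(hidden_size, 1)
opt = torch.optim.Adam(gate.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

# Stand-in batch: hidden states from the frozen base model, with labels
# 1 = unsafe context (defer to the expert), 0 = benign (keep the base model).
hidden_states = torch.randn(64, hidden_size)
labels = torch.randint(0, 2, (64, 1)).float()

for _ in range(200):
    opt.zero_grad()
    loss = bce(gate(hidden_states), labels)
    loss.backward()
    opt.step()
```

With only hidden_size + 1 parameters to fit, training overhead would be negligible next to any post-training method, which matches the abstract's efficiency claim.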
Problem

Research questions and friction points this paper is trying to address.

LLM safety
alignment
lightweight methods
generalization
computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

lightweight alignment
single neuron gating
safety-aware decoding
model self-reflection
LLM safety
πŸ‘₯ Authors
Sicheng Shen
Beijing Institute of AI Safety and Governance (Beijing AISI), Beijing Key Laboratory of Safe AI and Super-alignment, BrainCog Lab., CASIA, Zhongguancun Academy, UCAS
Mingyang Lv
Beijing Institute of AI Safety and Governance (Beijing AISI), Beijing Key Laboratory of Safe AI and Super-alignment, BrainCog Lab., CASIA, UCAS
Han Shen
Research Engineer, Ant Group; Ph.D., Rensselaer Polytechnic Institute
Optimization · Reinforcement Learning · Alignment
Jialin Wu
Ant Group Co., Ltd.
Binghao Wang
Ant Group Co., Ltd.
Zhou Yang
Ant Group Co., Ltd.
Guobin Shen
Institute of Automation, Chinese Academy of Sciences
bio-inspired neural networks · spiking neural networks · machine learning · cognitive science
Dongcheng Zhao
Beijing Institute of AI Safety and Governance
Spiking Neural Networks · Event-Based Vision · Brain-inspired AI · LLM Safety
Feifei Zhao
Beijing Institute of AI Safety and Governance (Beijing AISI), Beijing Key Laboratory of Safe AI and Super-alignment, BrainCog Lab., CASIA, UCAS
Yi Zeng
Institute of Automation, Chinese Academy of Sciences
Brain-inspired AI · AI Safety · AI Ethics and Governance