Controllable Value Alignment in Large Language Models through Neuron-Level Editing

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing steering-based approaches to value alignment in large language models often inadvertently activate non-target values, exhibiting limited controllability. To address this, the work proposes NeVA, a framework grounded in Schwartz's theory of basic human values that identifies sparse, value-relevant neurons and enables fine-grained alignment through activation editing at inference time, without requiring parameter updates. The paper also introduces the concept of "value leakage" along with a normalized metric to quantify it. NeVA substantially enhances both the controllability and interpretability of value alignment: it amplifies the expression of target values while significantly reducing average value leakage, with minimal impact on the model's general capabilities; residual effects are largely confined to semantically related value categories.
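
The paper does not ship code on this page; the following is a minimal sketch of what inference-time, neuron-level activation editing could look like using a forward hook on a Hugging Face transformer. The model name, layer index, neuron indices, and scaling factor are illustrative assumptions, not NeVA's published configuration.

```python
# Minimal sketch (illustrative, not NeVA's released code): amplify a sparse set
# of "value neurons" in one MLP layer at inference time via a forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"              # placeholder model for the sketch
VALUE_NEURONS = [17, 512, 1930]  # hypothetical indices of value-relevant neurons
ALPHA = 3.0                      # hypothetical steering strength
LAYER = 6                        # hypothetical layer to edit

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def amplify_value_neurons(module, inputs, output):
    # Scale activations of the selected neurons only; all other neurons are
    # untouched and no model parameters are modified.
    output[..., VALUE_NEURONS] *= ALPHA
    return output

# Hook the MLP activation of the chosen layer (GPT-2 layout; other models differ).
handle = model.transformer.h[LAYER].mlp.act.register_forward_hook(amplify_value_neurons)

prompt = "When making a difficult decision, the most important consideration is"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unedited model
```

Because the edit lives entirely in a hook, removing the handle restores the original model, which is what makes this style of intervention training-free.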

📝 Abstract
Aligning large language models (LLMs) with human values has become increasingly important as their influence on human behavior and decision-making expands. However, existing steering-based alignment methods suffer from limited controllability: steering a target value often unintentionally activates other, non-target values. To characterize this limitation, we introduce value leakage, a diagnostic notion that captures the unintended activation of non-target values during value steering, along with a normalized leakage metric grounded in Schwartz's value theory. In light of this analysis, we propose NeVA, a neuron-level editing framework for controllable value alignment in LLMs. NeVA identifies sparse, value-relevant neurons and performs inference-time activation editing, enabling fine-grained control without parameter updates or retraining. Experiments show that NeVA achieves stronger target value alignment while incurring smaller performance degradation on general capability. Moreover, NeVA significantly reduces the average leakage, with residual effects largely confined to semantically related value classes. Overall, NeVA offers a more controllable and interpretable mechanism for value alignment.
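
The abstract defines value leakage only informally (unintended activation of non-target values, normalized under Schwartz's value theory). Below is one plausible formalization for illustration; the value list follows Schwartz's ten basic values, but the scoring function, normalizer, and clipping are assumptions and may differ from the paper's metric.

```python
# Plausible formalization of "value leakage" (the paper's exact metric may differ):
# given expression scores for each Schwartz value before and after steering toward
# a target value, measure the average normalized shift on the non-target values.
from typing import Dict

SCHWARTZ_VALUES = [
    "self-direction", "stimulation", "hedonism", "achievement", "power",
    "security", "conformity", "tradition", "benevolence", "universalism",
]

def value_leakage(before: Dict[str, float],
                  after: Dict[str, float],
                  target: str) -> float:
    """Average relative increase of non-target value scores (negative shifts clipped at 0)."""
    gain_target = max(after[target] - before[target], 1e-8)  # normalizer
    leaks = []
    for v in SCHWARTZ_VALUES:
        if v == target:
            continue
        leaks.append(max(after[v] - before[v], 0.0) / gain_target)
    return sum(leaks) / len(leaks)

# Toy usage: steering toward "benevolence" also nudges "universalism".
before = {v: 0.50 for v in SCHWARTZ_VALUES}
after = dict(before, benevolence=0.90, universalism=0.58)
print(f"leakage = {value_leakage(before, after, 'benevolence'):.3f}")  # ~0.022
```

A metric in this spirit is zero when only the target value moves and grows as non-target values shift along with it, which matches the abstract's claim that NeVA reduces average leakage while residual effects stay confined to semantically related value classes.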
Problem

Research questions and friction points this paper is trying to address.

value alignment
value leakage
controllability
large language models
neuron-level editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

value leakage
neuron-level editing
controllable alignment
inference-time editing
Schwartz's value theory