🤖 AI Summary
This study investigates whether large language models (LLMs) harbor a brain-inspired sparse reward subsystem and examines its role in reasoning. By analyzing hidden states, the authors identify, for the first time, “value neurons” that encode internal state values and “dopamine neurons” that represent reward prediction errors. Through a combination of neuron intervention, activation analysis, cross-model transfer, and reward prediction error modeling, they demonstrate the robustness and transferability of this reward subsystem across diverse datasets, model scales, and architectures. The findings reveal that value neurons are critical for reasoning performance, while dopamine neurons exhibit activation patterns that accurately reflect deviations from expected outcomes, thereby uncovering a biologically plausible, brain-like reward mechanism within LLMs.
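The summary names the probing-and-intervention recipe but not its mechanics. The sketch below illustrates one plausible reading on synthetic data: it assumes value neurons are located with a sparse (L1-regularized) linear probe over per-step hidden states and tested by zero-ablation. The dimensions, labels, and the Lasso probe are placeholder assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of a probe-then-ablate analysis for "value neurons".
# All data, dimensions, and the Lasso choice are synthetic placeholders;
# the paper's actual models, layers, and reward labels are not reproduced.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical hidden states: one vector per reasoning step, paired with
# a scalar "state value" label (e.g., eventual answer correctness).
n_steps, d_model = 2000, 512
hidden = rng.normal(size=(n_steps, d_model))
true_neurons = rng.choice(d_model, size=8, replace=False)  # pretend ground truth
value = hidden[:, true_neurons].sum(axis=1) + 0.1 * rng.normal(size=n_steps)

# Step 1: fit a sparse probe; nonzero weights mark candidate value neurons.
probe = Lasso(alpha=0.1).fit(hidden, value)
value_neurons = np.flatnonzero(probe.coef_)
print("candidate value neurons:", value_neurons)

# Step 2: intervention — zero-ablate the candidates and measure how much
# the value readout degrades (a stand-in for the reasoning-performance
# drop the intervention experiments are said to measure).
ablated = hidden.copy()
ablated[:, value_neurons] = 0.0
print(f"probe R^2: baseline={probe.score(hidden, value):.3f}, "
      f"after ablation={probe.score(ablated, value):.3f}")
```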
📝 Abstract
In this paper, we identify a sparse reward subsystem within the hidden states of Large Language Models (LLMs), drawing an analogy to the biological reward subsystem in the human brain. We demonstrate that this subsystem contains value neurons that represent the model's internal expectation of state value, and through intervention experiments, we establish that these neurons are important for reasoning. Our experiments reveal that these value neurons are robust across diverse datasets, model scales, and architectures; furthermore, they exhibit significant transferability across different datasets and across models fine-tuned from the same base model. By examining cases where value predictions and actual rewards diverge, we identify dopamine neurons within the reward subsystem that encode reward prediction errors (RPEs). These neurons exhibit high activation when the reward is higher than expected and low activation when the reward is lower than expected.
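To make the RPE claim concrete: in its simplest one-step form, the prediction error is the gap between the realized reward and the predicted value, RPE = reward − predicted value, so it is positive when the outcome beats the expectation and negative when it falls short. The sketch below, on synthetic data, assumes this simple form and ranks candidate dopamine neurons by how strongly their activations correlate with the RPE; the correlation criterion and all names here are illustrative assumptions, not the paper's procedure.

```python
# Minimal sketch of a dopamine-neuron analysis on synthetic data:
# compute RPE = reward - predicted value, then rank neurons by how well
# their activations track it. All quantities are placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_steps, d_model = 2000, 512
activations = rng.normal(size=(n_steps, d_model))

# Hypothetical per-step quantities: the model's value estimate (e.g., a
# probe readout) and the realized reward (e.g., answer correctness).
predicted_value = rng.normal(size=n_steps)
reward = predicted_value + rng.normal(scale=0.5, size=n_steps)
rpe = reward - predicted_value  # positive: reward higher than expected

# Plant a few "dopamine neurons" whose activation follows the RPE, so the
# recovery step below has something to find.
dopamine_truth = rng.choice(d_model, size=4, replace=False)
activations[:, dopamine_truth] += rpe[:, None]

# Identify dopamine neurons as those whose activation correlates with RPE:
# high when the reward exceeds expectation, low when it falls short.
corr = np.array([np.corrcoef(activations[:, j], rpe)[0, 1]
                 for j in range(d_model)])
candidates = np.argsort(-np.abs(corr))[:4]
print("candidate dopamine neurons:", sorted(candidates))
print("their RPE correlations:", corr[candidates])
```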