Context Copying Modulation: The Role of Entropy Neurons in Managing Parametric and Contextual Knowledge Conflicts

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) behave inconsistently when contextual information conflicts with their parametric knowledge, and there is no unified account of the resulting output uncertainty. Through neuron-level activation analysis and systematic ablation studies across several mainstream autoregressive Transformer models, this work examines a class of hidden-layer neurons, termed "entropy neurons," which strongly affect output entropy while only moderately affecting token ranking. These neurons are shown to play a regulatory role in suppressing context copying and resolving knowledge conflicts: ablating them leads to significant increases in context copying and systematic shifts in the generation distribution. This provides an interpretable neural basis for understanding how LLMs select among competing knowledge sources, and suggests a route to behavior control via targeted modulation of neuron functionality.
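The core experimental move the summary describes is functional ablation: zero out a chosen set of hidden neurons and measure how the entropy of the output distribution shifts. The paper does this on real LLMs; below is a minimal, self-contained sketch of the same measurement on a toy MLP-plus-unembedding stand-in (all sizes, weights, and the ablated neuron indices are illustrative, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a transformer MLP block feeding the unembedding
# (hypothetical sizes: 16-dim residual stream, 64 hidden neurons, 100-token vocab).
W_in = rng.normal(size=(16, 64))
W_out = rng.normal(size=(64, 100))

def gelu(x):
    # tanh approximation of GELU, as used in GPT-style MLPs
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def forward(x, ablate=()):
    h = gelu(x @ W_in)
    h[:, list(ablate)] = 0.0          # zero-ablation of the chosen neurons
    return h @ W_out                  # logits over the toy vocabulary

def entropy(logits):
    """Shannon entropy (nats) of the softmax distribution, per row."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

x = rng.normal(size=(8, 16))                        # a batch of 8 hidden states
base = entropy(forward(x)).mean()
abl = entropy(forward(x, ablate=[3, 17, 42])).mean()  # hypothetical neuron set
print(f"mean output entropy  baseline={base:.3f}  ablated={abl:.3f}")
```

In the paper's setting the ablated set would be the identified entropy neurons of an actual model (selected by their effect on output entropy versus token ranking), and mean-ablation is a common alternative to the zeroing shown here.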

📝 Abstract
The behavior of Large Language Models (LLMs) when facing contextual information that conflicts with their internal parametric knowledge is inconsistent, with no generally accepted explanation for the expected outcome distribution. Recent work has identified in autoregressive transformer models a class of neurons -- called entropy neurons -- that produce a significant effect on the model output entropy while having an overall moderate impact on the ranking of the predicted tokens. In this paper, we investigate the preliminary claim that these neurons are involved in inhibiting context copying behavior in transformers by looking at their role in resolving conflicts between contextual and parametric information. We show that entropy neurons are responsible for suppressing context copying across a range of LLMs, and that ablating them leads to a significant change in the generation process. These results enhance our understanding of the internal dynamics of LLMs when handling conflicting information.
Problem

Research questions and friction points this paper is trying to address.

Investigating entropy neurons' role in resolving contextual-parametric knowledge conflicts
Examining how entropy neurons suppress context copying behavior in transformers
Understanding LLM internal dynamics when handling conflicting information sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy neurons modulate context copying behavior
Ablating entropy neurons alters generation process
Neurons resolve parametric and contextual knowledge conflicts
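Knowledge-conflict experiments of the kind these points describe are typically built from counterfactual prompts: the context asserts an answer that contradicts the model's parametric knowledge, and the "context copy rate" is the fraction of cases where the model echoes the injected answer. A minimal sketch of that metric follows; the facts, prompt template, and `predict` stub are illustrative stand-ins, not the paper's actual protocol.

```python
# Counterfactual knowledge-conflict probe (all names and data are illustrative).
facts = [
    ("The capital of France is", "Paris"),
    ("The chemical symbol for gold is", "Au"),
]
counterfactuals = ["London", "Fe"]  # context answers that contradict the facts

def predict(prompt: str) -> str:
    # Stand-in for a real LM call; this stub always trusts the quoted context,
    # i.e. it models a fully context-copying model.
    return prompt.split('"')[1]

copies = 0
for (stem, true_answer), fake_answer in zip(facts, counterfactuals):
    prompt = f'Context: {stem} "{fake_answer}". Question: {stem}'
    if predict(prompt) == fake_answer:
        copies += 1

copy_rate = copies / len(facts)
print(f"context copy rate: {copy_rate:.2f}")  # 1.00 for this copying stub
```

Under this framing, the paper's claim is that ablating entropy neurons pushes a real model's copy rate upward, i.e. toward the behavior of the stub above.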