Identifying Good and Bad Neurons for Task-Level Controllable LLMs

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to identify neurons in large language models that suppress a task, and to avoid misattributing neurons when models answer correctly by chance, limiting our understanding and control of model mechanisms. This work proposes NeuronLLM, a framework that introduces, for the first time, a task-level functional-antagonism model of "good" versus "bad" neurons. Using contrastive learning, NeuronLLM jointly identifies neurons that promote or inhibit a given task, while an augmented question set mitigates attribution bias caused by incidental correctness. Evaluated on four NLP tasks across multiple LLM architectures and scales, the method significantly outperforms existing approaches and yields new insights into the internal functional organization of these models.

📝 Abstract
Large Language Models have demonstrated remarkable capabilities on multiple-choice question answering benchmarks, but the complex mechanisms underlying their large-scale neurons remain opaque, posing significant challenges for understanding and steering LLMs. While recent studies have made progress in identifying the neurons responsible for certain abilities, these ability-specific methods are infeasible for task-focused scenarios requiring coordinated use of multiple abilities. Moreover, these approaches focus only on supportive neurons that correlate positively with task completion, while neglecting neurons with other roles (such as inhibitive ones) and neuron attribution misled by fortuitous behaviors in LLMs (i.e., answering questions correctly by chance rather than through genuine understanding). To address these challenges, we propose NeuronLLM, a novel task-level LLM understanding framework that adopts the biological principle of functional antagonism for LLM neuron identification. The key insight is that task performance is jointly determined by neurons with two opposing roles: good neurons that facilitate task completion and bad neurons that inhibit it. NeuronLLM achieves a holistic modeling of neurons via contrastive learning of good and bad neurons, while leveraging augmented question sets to mitigate the fortuitous behaviors in LLMs. Comprehensive experiments on LLMs of different sizes and families show the superiority of NeuronLLM over existing methods in four NLP tasks, providing new insights into LLM functional organization.
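To make the good/bad neuron idea concrete, here is a minimal sketch of contrastively scoring neurons by their activation gap between correct and incorrect task completions. This is not the paper's actual method; the function names, the normalized-gap scoring rule, and the synthetic data are illustrative assumptions standing in for NeuronLLM's contrastive objective.

```python
import numpy as np

def score_neurons(acts_correct, acts_incorrect):
    """Score each neuron by its mean activation gap between correct and
    incorrect completions, normalized by the pooled standard deviation.
    (Illustrative stand-in for a contrastive good/bad-neuron objective.)"""
    gap = acts_correct.mean(axis=0) - acts_incorrect.mean(axis=0)
    pooled = np.concatenate([acts_correct, acts_incorrect], axis=0)
    std = pooled.std(axis=0) + 1e-8  # avoid division by zero
    return gap / std

def split_good_bad(scores, k):
    """Top-k scores -> 'good' neurons (promote the task);
    bottom-k -> 'bad' neurons (inhibit it)."""
    order = np.argsort(scores)
    return order[-k:][::-1], order[:k]

# Synthetic activations: 200 correct and 200 incorrect completions,
# 100 neurons, with one planted promoter and one planted inhibitor.
rng = np.random.default_rng(0)
n_neurons = 100
acts_correct = rng.normal(0.0, 1.0, size=(200, n_neurons))
acts_incorrect = rng.normal(0.0, 1.0, size=(200, n_neurons))
acts_correct[:, 3] += 2.0    # fires more on correct answers -> "good"
acts_incorrect[:, 7] += 2.0  # fires more on incorrect answers -> "bad"

scores = score_neurons(acts_correct, acts_incorrect)
good, bad = split_good_bad(scores, k=1)
```

On this synthetic data the planted promoter (index 3) surfaces as the top good neuron and the planted inhibitor (index 7) as the top bad neuron; the paper's framework additionally uses augmented question sets so that such gaps are not driven by chance-correct answers.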
Problem

Research questions and friction points this paper is trying to address.

neuron identification
task-level controllability
functional antagonism
fortuitous behavior
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

functional antagonism
good and bad neurons
contrastive learning
task-level controllability
neuron attribution