Auxiliary Metrics Help Decoding Skill Neurons in the Wild

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the opacity of internal mechanisms in large language models (LLMs) and the difficulty of interpreting multi-skill interactions. We propose a lightweight, annotation-free neuron identification method grounded in a soft-prompt training framework. By jointly leveraging external labels and model confidence scores as auxiliary signals, our approach models the association between neuron activations and task-specific skills, enabling interpretable localization of skill-selective neurons for complex tasks such as arithmetic reasoning, natural language inference, and open-ended generation. A key contribution is the discovery of previously unknown implicit arithmetic reasoning shortcuts within LLMs. Evaluated on benchmarks including BigBench, our method identifies neurons critical to both known capabilities and novel reasoning pathways, advancing model interpretability and enabling precise, controllable model editing as a step toward mechanistic analysis of LLMs.

📝 Abstract
Large language models (LLMs) exhibit remarkable capabilities across a wide range of tasks, yet their internal mechanisms remain largely opaque. In this paper, we introduce a simple, lightweight, and broadly applicable method with a focus on isolating neurons that encode specific skills. Building upon prior work that identified "skill neurons" via soft prompt training on classification tasks, our approach extends the analysis to complex scenarios involving multiple skills. We correlate neuron activations with auxiliary metrics -- such as external labels and the model's own confidence score -- thereby uncovering interpretable and task-specific behaviors without the need for manual token aggregation. We empirically validate our method on tasks spanning open-ended text generation and natural language inference, demonstrating its ability to detect neurons that not only drive known skills but also reveal previously unidentified shortcuts in arithmetic reasoning on BigBench.
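The abstract mentions using "the model's own confidence score" as an auxiliary metric. The paper's exact definition is not given here; a common proxy for per-prediction confidence is the maximum softmax probability over the output distribution, sketched below (the function name and NumPy implementation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def softmax_confidence(logits):
    """Illustrative confidence proxy: the maximum softmax probability
    over the model's output logits for a single prediction.
    logits: (n_classes,) array of raw model outputs.
    """
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax
    return float(p.max())              # confidence in [1/n_classes, 1]
```

A sharply peaked logit vector yields confidence near 1, while uniform logits yield 1/n_classes, so this scalar can serve as a label-free auxiliary signal per prompt.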
Problem

Research questions and friction points this paper is trying to address.

Identify skill-specific neurons in large language models
Extend neuron analysis to complex multi-skill scenarios
Uncover interpretable behaviors using auxiliary metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Correlate neuron activations with auxiliary metrics
Extend skill neuron analysis to complex scenarios
Detect neurons driving skills and revealing shortcuts
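The core innovation, correlating neuron activations with an auxiliary metric to localize skill-selective neurons, can be sketched as a simple per-neuron Pearson correlation followed by a top-k selection. This is a minimal illustration under assumed array shapes, not the paper's implementation; the function names are hypothetical:

```python
import numpy as np

def skill_neuron_scores(activations, aux_metric):
    """Absolute Pearson correlation between each neuron's activations
    and an auxiliary signal (e.g. external labels or confidence scores).

    activations: (n_prompts, n_neurons) activation matrix
    aux_metric:  (n_prompts,) auxiliary values, one per prompt
    Returns a (n_neurons,) array of |correlation| scores.
    """
    a = activations - activations.mean(axis=0)   # center each neuron
    m = aux_metric - aux_metric.mean()           # center the metric
    num = a.T @ m                                # per-neuron covariance sums
    denom = np.linalg.norm(a, axis=0) * np.linalg.norm(m) + 1e-12
    return np.abs(num / denom)

def top_skill_neurons(activations, aux_metric, k=5):
    """Indices of the k neurons most correlated with the metric."""
    scores = skill_neuron_scores(activations, aux_metric)
    return np.argsort(scores)[::-1][:k]
```

A neuron whose activation tracks the auxiliary signal across prompts scores near 1 and is flagged as skill-selective; unrelated neurons score near 0.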