Neuron-Guided Interpretation of Code LLMs: Where, Why, and How?

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Code large language models (Code LLMs) lack neuron-level interpretability; existing NLP interpretability methods fail to accommodate programming languages' syntactic rigidity, structural hierarchy, and semantic executability. Method: We first systematically uncover the coexistence of "language-specific neurons" and a "cross-lingual conceptual layer" in Code LLMs, revealing a hierarchical representation mechanism: syntax is concentrated in lower layers, while semantic abstraction emerges in middle layers. We propose a conceptual-layer-based framework comprising neuron selectivity analysis, layer-wise contribution attribution, and embedding extraction. Contribution/Results: We instantiate this framework as evaluation benchmarks for multilingual code generation, code clone detection, and cross-lingual code summarization. On this basis, we establish new paradigms for fine-tuning, clone detection, and cross-lingual summarization transfer, achieving consistent performance gains across diverse tasks and empirically validating that conceptual-layer alignment captures the semantic essence of code.
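The neuron selectivity analysis in the framework above can be sketched as follows. This is a hypothetical scoring scheme, not the paper's exact metric: for each neuron, compare its mean activation on the language where it fires most strongly against its mean activation on all other languages; a large gap marks a language-specific neuron.

```python
import numpy as np

def language_selectivity(acts: dict) -> np.ndarray:
    """Per-neuron selectivity score (assumed metric): mean activation in
    the neuron's top language minus its mean activation elsewhere.
    acts maps language name -> (n_samples, n_neurons) activation matrix."""
    means = np.stack([a.mean(axis=0) for a in acts.values()])  # (n_langs, n_neurons)
    top = means.max(axis=0)                                    # strongest language per neuron
    rest = (means.sum(axis=0) - top) / (means.shape[0] - 1)    # mean over the other languages
    return top - rest

# Toy activations for 3 languages and 4 neurons (synthetic, not model data)
rng = np.random.default_rng(0)
acts = {
    "python": rng.normal(0, 1, (100, 4)),
    "java":   rng.normal(0, 1, (100, 4)),
    "go":     rng.normal(0, 1, (100, 4)),
}
acts["python"][:, 0] += 3.0  # make neuron 0 fire mostly on Python inputs
scores = language_selectivity(acts)
print(scores.argmax())  # neuron 0 is the most language-specific
```

A universal neuron, in this picture, is one whose score stays near zero: it responds comparably across all five languages.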

📝 Abstract
Code language models excel on code intelligence tasks, yet their internal interpretability is underexplored. Existing neuron interpretability techniques from NLP are suboptimal for source code due to programming languages' formal, hierarchical, and executable nature. We empirically investigate code LLMs at the neuron level, localizing language-specific neurons (selectively responsive to one language) and concept layers (feed-forward layers encoding language-agnostic code representations). We analyze Llama-3.1-8B and Qwen2.5-Coder-32B on multilingual inputs in C++, Java, Python, Go, and JavaScript, measuring neuron selectivity and layerwise contributions during generation. We find (1) neurons specialized for individual languages alongside a universal subset supporting general-purpose generation; and (2) lower layers mainly encode language-specific syntax, while middle layers capture semantic abstractions shared across languages, emerging as concept layers. We demonstrate utility on three tasks: neuron-guided fine-tuning for code generation, clone detection via concept-layer embeddings, and concept-layer-guided transfer for code summarization, each yielding consistent gains in multilingual settings.
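The clone-detection use of concept layers can be illustrated with a minimal sketch. Assumptions not in the source: mean pooling over tokens as the embedding, cosine similarity as the comparison, and a fixed middle-layer index (the paper locates the concept layer empirically). The toy hidden states below stand in for real model activations.

```python
import numpy as np

def concept_layer_embedding(hidden_states: np.ndarray, layer: int) -> np.ndarray:
    """Mean-pool token states at one middle ('concept') layer.
    hidden_states: (n_layers, n_tokens, d_model). The layer index is an
    assumed hyperparameter; the paper identifies it via layerwise attribution."""
    return hidden_states[layer].mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy states: a snippet, a near-clone, and an unrelated snippet (synthetic)
rng = np.random.default_rng(1)
base = rng.normal(size=(6, 8, 16))                        # 6 layers, 8 tokens, d=16
clone = base + rng.normal(scale=0.05, size=base.shape)    # small perturbation
other = rng.normal(size=base.shape)                       # unrelated program

e1, e2, e3 = (concept_layer_embedding(h, layer=3) for h in (base, clone, other))
print(cosine(e1, e2) > cosine(e1, e3))  # clones lie closer in concept space
```

The paper's claim is that because middle layers encode language-agnostic semantics, this comparison also works for clones written in different languages, where surface-level token matching fails.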
Problem

Research questions and friction points this paper is trying to address.

Interpreting neuron-level mechanisms in code language models
Adapting NLP neuron interpretability to programming languages
Applying neuron insights to multilingual code generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuron-level analysis of code LLMs
Localization of language-specific neurons and concept layers
Neuron-guided fine-tuning and concept-layer embeddings for tasks
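The neuron-guided fine-tuning idea in the list above can be sketched as a masked parameter update. This is a hypothetical simplification, not the paper's training recipe: apply gradient steps only to weight rows belonging to the selected language-specific neurons and freeze everything else.

```python
import numpy as np

def neuron_guided_update(W: np.ndarray, grad: np.ndarray,
                         specific_neurons, lr: float = 0.1) -> np.ndarray:
    """One SGD step restricted to selected neurons (assumed scheme):
    rows of W indexed by specific_neurons are updated, the rest are frozen."""
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[list(specific_neurons)] = True
    W_new = W.copy()
    W_new[mask] -= lr * grad[mask]  # update only language-specific rows
    return W_new

# Toy 4-neuron layer: fine-tune neurons 0 and 2, freeze neurons 1 and 3
W = np.ones((4, 3))
grad = np.ones((4, 3))
W2 = neuron_guided_update(W, grad, specific_neurons=[0, 2])
print(W2[:, 0])  # rows 0 and 2 moved, rows 1 and 3 unchanged
```

In a real setup the same masking would be expressed as per-parameter gradient hooks on the model's feed-forward weights, keeping the universal neurons intact while adapting the language-specific ones.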