Exploring Task Performance with Interpretable Models via Sparse Auto-Encoders

📅 2025-07-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The “black-box” nature of large language models (LLMs) undermines their trustworthiness and debuggability. To address this, we propose a neuron-level decomposition method grounded in sparse autoencoders (SAEs) and dictionary learning that systematically identifies and localizes monosemantic features corresponding to specific semantic misinterpretations within LLMs, enabling automated prompt reconstruction and optimization. Unlike prior interpretability approaches that focus solely on attribution or visualization and often fail to inform actionable improvements, our method directly links feature disentanglement to measurable task-performance gains. Evaluated on mathematical reasoning and metaphor detection, it yields accuracy improvements of +5.2% to +8.7% while delivering verifiable, intervenable semantic explanations. Our core contribution is a misinterpretation-driven prompt-optimization paradigm that enhances both model interpretability and empirical performance.

📝 Abstract
Large Language Models (LLMs) are traditionally viewed as black-box algorithms, which reduces trustworthiness and obscures potential approaches to improving performance on downstream tasks. In this work, we apply an effective LLM decomposition method that uses a dictionary-learning approach with sparse autoencoders to extract monosemantic features from polysemantic LLM neurons. Notably, our work identifies model-internal misunderstandings, allowing prompts to be automatically reformulated with additional annotations that improve the LLM's interpretation. This approach also yields significant performance improvements on downstream tasks such as mathematical reasoning and metaphor detection.
Problem

Research questions and friction points this paper is trying to address.

Interpret black-box LLMs using sparse autoencoders
Extract monosemantic features from polysemantic neurons
Improve downstream task performance via prompt reformulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using sparse autoencoders for LLM decomposition
Extracting monosemantic features from polysemantic neurons
Automatically reformulating prompts with annotations
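The dictionary-learning setup behind the SAE decomposition can be sketched as follows. This is an illustrative toy, not the paper's implementation: the class name, dimensions, and loss coefficients are assumptions. The key idea it demonstrates is mapping a dense (polysemantic) activation vector into an overcomplete, non-negative feature space, where an L1 penalty during training encourages each feature to fire sparsely (i.e., monosemantically).

```python
import numpy as np

class SparseAutoencoder:
    """Minimal dictionary-learning-style SAE sketch (assumed architecture):
    h = ReLU(W_e x + b_e), x_hat = W_d h + b_d,
    trained on reconstruction error plus an L1 sparsity penalty on h."""

    def __init__(self, d_model, d_dict, seed=0):
        rng = np.random.default_rng(seed)
        # Overcomplete dictionary: d_dict >> d_model in practice
        self.W_e = rng.normal(0, 0.1, (d_dict, d_model))  # encoder weights
        self.b_e = np.zeros(d_dict)
        self.W_d = rng.normal(0, 0.1, (d_model, d_dict))  # decoder (feature dictionary)
        self.b_d = np.zeros(d_model)

    def encode(self, x):
        # ReLU keeps features non-negative; the L1 term (in loss) keeps them sparse
        return np.maximum(0.0, self.W_e @ x + self.b_e)

    def decode(self, h):
        return self.W_d @ h + self.b_d

    def loss(self, x, l1_coeff=1e-3):
        h = self.encode(x)
        x_hat = self.decode(h)
        return float(np.sum((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(h)))

sae = SparseAutoencoder(d_model=8, d_dict=32)
x = np.ones(8)      # stand-in for one neuron-activation vector from an LLM layer
h = sae.encode(x)   # overcomplete, non-negative feature vector
print(h.shape)      # (32,)
```

In the paper's pipeline, individual dimensions of `h` would then be inspected to localize features tied to semantic misinterpretations, which in turn drive the prompt reformulation.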