🤖 AI Summary
The “black-box” nature of large language models (LLMs) undermines their trustworthiness and debuggability. To address this, we propose a neuron-level decomposition method grounded in sparse autoencoders (SAEs) and dictionary learning, which identifies and localizes monosemantic features corresponding to specific semantic misinterpretations within LLMs, enabling automated prompt reconstruction and optimization. Unlike prior interpretability approaches that focus on attribution or visualization and often fail to inform actionable improvements, our method directly links feature disentanglement to measurable task performance gains. Evaluated on mathematical reasoning and metaphor detection, it yields accuracy improvements of +5.2% to +8.7% while delivering verifiable, intervenable semantic explanations. Our core contribution is a misinterpretation-driven prompt optimization paradigm that improves both model interpretability and empirical performance.
📝 Abstract
Large Language Models (LLMs) are traditionally viewed as black boxes, which reduces their trustworthiness and obscures potential avenues for improving performance on downstream tasks. In this work, we apply an effective LLM decomposition method based on dictionary learning with sparse autoencoders, which extracts monosemantic features from polysemantic LLM neurons. Remarkably, these features let us identify model-internal misunderstandings and automatically reformulate prompts with additional annotations so that the LLM interprets them correctly. This approach yields significant performance improvements on downstream tasks such as mathematical reasoning and metaphor detection.
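To make the dictionary-learning idea concrete, the following is a minimal NumPy sketch of an overcomplete sparse autoencoder trained on toy "neuron activation" data with a reconstruction loss plus an L1 sparsity penalty. All dimensions, hyperparameters, and the synthetic data generator are illustrative assumptions, not the paper's actual setup or scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for polysemantic neuron activations: each 16-dim sample is a
# sparse combination of 32 hidden ground-truth features (assumed toy setup).
n_feat, d_model, n_samples = 32, 16, 256
true_dict = rng.normal(size=(n_feat, d_model))
codes = rng.random((n_samples, n_feat)) * (rng.random((n_samples, n_feat)) < 0.1)
X = codes @ true_dict

# Overcomplete sparse autoencoder: linear encoder -> ReLU -> linear decoder.
W_enc = rng.normal(scale=0.1, size=(d_model, n_feat))
W_dec = rng.normal(scale=0.1, size=(n_feat, d_model))
lam, lr = 1e-3, 1e-2  # L1 sparsity weight and SGD learning rate (illustrative)

def forward(X):
    h = np.maximum(X @ W_enc, 0.0)  # sparse, ideally monosemantic features
    return h, h @ W_dec             # reconstruction of the activations

_, X_hat0 = forward(X)
mse_init = np.mean((X_hat0 - X) ** 2)

# Full-batch gradient descent on 0.5*||X_hat - X||^2 + lam*||h||_1.
for step in range(800):
    h, X_hat = forward(X)
    err = X_hat - X                         # d(loss)/d(X_hat)
    grad_dec = h.T @ err / len(X)
    grad_h = (err @ W_dec.T + lam * np.sign(h)) * (h > 0)  # ReLU gradient
    grad_enc = X.T @ grad_h / len(X)
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec

h, X_hat = forward(X)
mse = np.mean((X_hat - X) ** 2)   # should drop well below mse_init
sparsity = np.mean(h > 0)         # fraction of active features per sample
```

In an actual pipeline of this kind, the rows of `W_dec` play the role of dictionary elements: a feature that activates on semantically misinterpreted inputs can then be inspected and used to guide prompt reformulation.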