Falsifying Sparse Autoencoder Reasoning Features in Language Models

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether sparse autoencoders (SAEs) can identify genuine reasoning-related features in large language models (LLMs), rather than merely capturing superficial linguistic correlations. To this end, we propose a falsification-oriented evaluation framework that integrates LLM-guided counterexample generation, causal interventions, and contrastive activation analysis to systematically assess the causal efficacy of SAE-derived features. Our experiments reveal that the vast majority of SAE features are triggered by only a few tokens and fail under falsification tests; moreover, manipulating these features not only fails to improve reasoning performance but often degrades it. These findings suggest that such features predominantly reflect statistical language patterns rather than authentic reasoning mechanisms. This work establishes a rigorous causal validation paradigm for interpretability research in LLMs.
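For intuition, the token-level falsification test described above can be pictured as follows: take a candidate feature's most associated tokens, splice them into clearly non-reasoning text, and check whether the feature fires anyway. Below is a minimal sketch, not the authors' released code (which is linked in the abstract), assuming a TransformerLens-style model and an SAELens-style SAE; the hook name, `encode` interface, threshold, and helper names are illustrative assumptions.

```python
import torch

def max_feature_activation(model, sae, text, feature_idx, layer):
    """Max activation of one SAE feature over the tokens of `text`."""
    tokens = model.to_tokens(text)
    with torch.no_grad():
        _, cache = model.run_with_cache(tokens)
    resid = cache[f"blocks.{layer}.hook_resid_post"]  # (1, seq, d_model)
    feats = sae.encode(resid)                         # (1, seq, d_sae)
    return feats[0, :, feature_idx].max().item()

def token_injection_rate(model, sae, feature_idx, layer,
                         non_reasoning_texts, trigger_tokens,
                         threshold=1.0):
    """Fraction of non-reasoning texts on which splicing in a few
    feature-associated tokens is enough to activate the feature."""
    fooled = 0
    for text in non_reasoning_texts:
        before = max_feature_activation(model, sae, text, feature_idx, layer)
        spliced = text + " " + " ".join(trigger_tokens)
        after = max_feature_activation(model, sae, spliced, feature_idx, layer)
        if before < threshold <= after:  # feature flipped on by tokens alone
            fooled += 1
    return fooled / len(non_reasoning_texts)
```

A candidate that fires on most spliced controls behaves like a token detector rather than a reasoning monitor, which is the failure mode the summary reports for the majority of features.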

📝 Abstract
We study how reliably sparse autoencoders (SAEs) support claims about reasoning-related internal features in large language models. We first give a stylized analysis showing that sparsity-regularized decoding can preferentially retain stable low-dimensional correlates while suppressing high-dimensional within-behavior variation, motivating the possibility that contrastively selected "reasoning" features may concentrate on cue-like structure when such cues are coupled with reasoning traces. Building on this perspective, we propose a falsification-based evaluation framework that combines causal token injection with LLM-guided counterexample construction. Across 22 configurations spanning multiple model families, layers, and reasoning datasets, we find that many contrastively selected candidates are highly sensitive to token-level interventions, with 45%-90% activating after injecting only a few associated tokens into non-reasoning text. For the remaining context-dependent candidates, LLM-guided falsification produces targeted non-reasoning inputs that trigger activation and meaning-preserving paraphrases of top-activating reasoning traces that suppress it. A small steering study yields minimal changes on the evaluated benchmarks. Overall, our results suggest that, in the settings we study, sparse decompositions can favor low-dimensional correlates that co-occur with reasoning, underscoring the need for falsification when attributing high-level behaviors to individual SAE features. Code is available at https://github.com/GeorgeMLP/reasoning-probing.
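The contrastive selection step the abstract refers to can be sketched in the same spirit: rank SAE features by how much more they activate on reasoning traces than on matched non-reasoning controls. This is a hedged illustration under the same assumed TransformerLens/SAELens-style interfaces, not the paper's implementation; corpus construction and ranking details in the paper may differ.

```python
import torch

def mean_feature_activations(model, sae, texts, layer):
    """Mean per-feature SAE activation across all tokens of `texts`."""
    totals, n_tokens = None, 0
    for text in texts:
        tokens = model.to_tokens(text)
        with torch.no_grad():
            _, cache = model.run_with_cache(tokens)
        feats = sae.encode(cache[f"blocks.{layer}.hook_resid_post"]).squeeze(0)
        totals = feats.sum(0) if totals is None else totals + feats.sum(0)
        n_tokens += feats.shape[0]
    return totals / n_tokens  # (d_sae,)

def contrastive_candidates(model, sae, reasoning_texts, control_texts,
                           layer, top_k=50):
    """Top-k features with the largest reasoning-vs-control activation gap."""
    pos = mean_feature_activations(model, sae, reasoning_texts, layer)
    neg = mean_feature_activations(model, sae, control_texts, layer)
    return torch.topk(pos - neg, k=top_k).indices.tolist()
```

Candidates selected this way are exactly what the falsification framework then stress-tests: the abstract's point is that this ranking alone cannot distinguish a reasoning mechanism from a stable low-dimensional cue that merely co-occurs with reasoning traces.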
Problem

Research questions and friction points this paper is trying to address.

sparse autoencoders
reasoning features
large language models
linguistic correlates
feature activation
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse autoencoders
reasoning features
causal intervention
falsification framework
linguistic artifacts