Training Superior Sparse Autoencoders for Instruct Models

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor reconstruction fidelity and low interpretability of sparse autoencoders (SAEs) trained on instruction-tuned large language models (e.g., Qwen2.5-7B-Instruct, Llama3.2-3B-Instruct), this paper proposes Finetuning-aligned Sequential Training (FAST). FAST is a fine-grained training paradigm that explicitly aligns SAE learning with the data distribution and activation patterns characteristic of instruction-tuned models, significantly improving both reconstruction accuracy and semantic interpretability. The authors further discover that targeted intervention on the activations of special tokens enables controllable, behavior-level model modulation, opening a novel mechanistic intervention pathway. Experiments demonstrate substantial gains: MSE drops to 0.6468 on Qwen2.5-7B-Instruct (down from baseline errors of 5.1985 and 1.5096), and the proportion of high-quality features reaches 21.1% on Llama3.2-3B-Instruct (more than double the best baseline's 10.2%). The project releases 240 pretrained SAEs and full implementation code.
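
The reported MSE is the reconstruction error of the SAE on the model's internal activations. As a rough illustration, a minimal TopK-style SAE and its reconstruction metric might look like the sketch below; the architecture, dictionary size, and sparsity level here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Sparse autoencoder with TopK sparsity (illustrative; the paper's
    exact architecture and hyperparameters may differ)."""

    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)
        self.k = k

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the k largest feature activations per token.
        pre = torch.relu(self.encoder(x))
        topk = torch.topk(pre, self.k, dim=-1)
        feats = torch.zeros_like(pre).scatter_(-1, topk.indices, topk.values)
        return self.decoder(feats), feats

# Reconstruction MSE over a batch of LLM activations (hypothetical shapes;
# 3584 is Qwen2.5-7B's hidden size, the rest are assumed values).
sae = TopKSAE(d_model=3584, d_hidden=65536, k=64)
acts = torch.randn(8, 512, 3584)  # [batch, seq, d_model]
recon, feats = sae(acts)
mse = torch.mean((recon - acts) ** 2)
print(f"reconstruction MSE: {mse.item():.4f}")
```

Lower MSE means the sparse feature basis loses less of the original activation; the paper's contribution is the training procedure rather than any particular SAE architecture.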

📝 Abstract
As large language models (LLMs) grow in scale and capability, understanding their internal mechanisms becomes increasingly critical. Sparse autoencoders (SAEs) have emerged as a key tool in mechanistic interpretability, enabling the extraction of human-interpretable features from LLMs. However, existing SAE training methods are primarily designed for base models, resulting in reduced reconstruction quality and interpretability when applied to instruct models. To bridge this gap, we propose $\underline{\textbf{F}}$inetuning-$\underline{\textbf{a}}$ligned $\underline{\textbf{S}}$equential $\underline{\textbf{T}}$raining ($\textit{FAST}$), a novel training method specifically tailored for instruct models. $\textit{FAST}$ aligns the training process with the data distribution and activation patterns characteristic of instruct models, resulting in substantial improvements in both reconstruction and feature interpretability. On Qwen2.5-7B-Instruct, $\textit{FAST}$ achieves a mean squared error of 0.6468 in token reconstruction, significantly outperforming baseline methods with errors of 5.1985 and 1.5096. In feature interpretability, $\textit{FAST}$ yields a higher proportion of high-quality features: for Llama3.2-3B-Instruct, $21.1\%$ of features scored in the top range, compared to $7.0\%$ and $10.2\%$ for $\textit{BT(P)}$ and $\textit{BT(F)}$. Surprisingly, we discover that intervening on the activations of special tokens via the SAEs leads to improvements in output quality, suggesting new opportunities for fine-grained control of model behavior. Code, data, and 240 trained SAEs are available at https://github.com/Geaming2002/FAST.
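
The special-token intervention mentioned at the end of the abstract can be pictured as editing SAE features only at the positions of chat-template tokens and writing the result back into the residual stream. A minimal sketch, reusing the TopKSAE above and assuming hypothetical choices of target feature and scale (the paper's actual intervention settings are not given here):

```python
import torch

@torch.no_grad()
def intervene_on_special_tokens(hidden, token_ids, sae, special_ids,
                                feature_idx, scale=2.0):
    """Edit SAE features at special-token positions only.

    Sketch of the intervention idea; feature_idx, scale, and which special
    tokens to target are hypothetical choices, not the paper's settings.
    """
    _, feats = sae(hidden)                     # [batch, seq, d_hidden]
    feats[..., feature_idx] *= scale           # amplify the chosen feature
    steered = sae.decoder(feats)               # decode the edited features
    mask = torch.isin(token_ids, special_ids)  # special-token positions
    # Write the steered reconstruction back only where special tokens sit;
    # all other positions keep the original hidden states untouched.
    return torch.where(mask.unsqueeze(-1), steered, hidden)
```

In practice such an edit would run inside a forward hook on the chosen layer during generation.
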
Problem

Research questions and friction points this paper is trying to address.

Improving sparse autoencoder training for instruct models
Enhancing reconstruction quality and feature interpretability in LLMs
Aligning SAE training with instruct model data distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

FAST method for instruct models
Aligns training with instruct patterns
Improves reconstruction and interpretability (see the training-loop sketch below)
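
A minimal sketch of what such finetuning-aligned sequential training could look like, assuming activations are streamed in sequence order from chat-template-formatted instruction data rather than from a pre-shuffled buffer; the model choice, layer index, hyperparameters, and toy dataset below are illustrative assumptions, not the paper's reported setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical setup: the layer index and hyperparameters are assumptions.
MODEL = "Qwen/Qwen2.5-7B-Instruct"
LAYER = 16

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, output_hidden_states=True
)
model.eval()

# Toy stand-in for an instruction-tuning corpus.
dataset = [
    [{"role": "user", "content": "What is a sparse autoencoder?"},
     {"role": "assistant", "content": "A model that learns sparse features."}],
]

def instruct_activations(conversations):
    """Yield residual-stream activations from chat-template-formatted data,
    one sequence at a time, so the SAE trains on the instruct model's own
    token distribution (system/user/assistant turns, special tokens included)."""
    for msgs in conversations:
        prompt = tok.apply_chat_template(msgs, tokenize=False)
        ids = tok(prompt, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(**ids)
        yield out.hidden_states[LAYER].squeeze(0).float()  # [seq, d_model]

sae = TopKSAE(d_model=3584, d_hidden=65536, k=64)  # from the sketch above
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

# Sequential (in-order) updates instead of a shuffled activation buffer.
for acts in instruct_activations(dataset):
    recon, _ = sae(acts)
    loss = torch.mean((recon - acts) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
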
👥 Authors

Jiaming Li
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Haoran Ye
AI PhD @ Peking University
Agent, AI Safety and Alignment, AI Psychology, Learn to Optimize, Evolutionary Computation

Yukun Chen
Pieces Technologies Inc.
Natural Language Processing

Xinyue Li

Lei Zhang
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Hamid Alinejad-Rokny
ARC DECRA & UNSW Scientia Fellow, Head of BioMedical Machine Learning Lab
BioMedical Machine Learning, Machine Learning for Health, Medical Artificial Intelligence, LLMs

Jimmy Chih-Hsien Peng
National University of Singapore

Min Yang
Bytedance
Vision Language Model, Computer Vision, Video Understanding