Control Reinforcement Learning: Interpretable Token-Level Steering of LLMs via Sparse Autoencoder Features

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing sparse autoencoders, which can identify activated features but struggle to determine which feature interventions alter large language model outputs. To enable fine-grained and interpretable control, the authors propose a Control Reinforcement Learning framework that dynamically selects and intervenes on sparse autoencoder features at the token level via reinforcement learning, augmented with an adaptive feature masking mechanism. The method introduces novel analytical capabilities—including branch-point tracking, critic trajectory analysis, and cross-layer feature comparison—and demonstrates performance improvements across multiple benchmarks (MMLU, BBQ, GSM8K, HarmBench, and XSTest) on the Gemma 2 2B model. Additionally, it generates per-token intervention logs, substantially enhancing model interpretability and controllability.

📝 Abstract
Sparse autoencoders (SAEs) decompose language model activations into interpretable features, but existing methods reveal only which features activate, not which change model outputs when amplified. We introduce Control Reinforcement Learning (CRL), which trains a policy to select SAE features for steering at each token, producing interpretable intervention logs: the learned policy identifies features that change model outputs when amplified. Adaptive Feature Masking encourages diverse feature discovery while preserving single-feature interpretability. The framework yields new analysis capabilities: branch-point tracking locates tokens where feature choice determines output correctness; critic trajectory analysis separates policy limitations from value estimation errors; layer-wise comparison reveals syntactic features in early layers and semantic features in later layers. On Gemma 2 2B across MMLU, BBQ, GSM8K, HarmBench, and XSTest, CRL achieves improvements while providing per-token intervention logs. These results establish learned feature steering as a mechanistic interpretability tool that complements static feature analysis with dynamic intervention probes.
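The core mechanism described above, a policy selecting one SAE feature to amplify at each token and logging that choice, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy dimensions, random SAE weights, the `steer_token` helper, and the amplification rule `alpha` are all hypothetical stand-ins (the paper works on Gemma 2 2B with trained SAEs and an RL-trained policy).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; real SAEs on Gemma 2 2B are far larger).
d_model, d_sae = 8, 32

# Random weights standing in for a trained sparse autoencoder.
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))

def sae_encode(h):
    """ReLU feature activations for a residual-stream vector h."""
    return np.maximum(h @ W_enc, 0.0)

def sae_decode(f):
    """Map feature activations back to the residual stream."""
    return f @ W_dec

def steer_token(h, policy_logits, alpha=4.0):
    """Amplify the single SAE feature chosen by a policy at one token.

    policy_logits: per-feature scores from a learned policy (random here).
    Returns the steered hidden state plus the chosen feature index,
    which becomes one entry of a per-token intervention log.
    """
    f = sae_encode(h)
    idx = int(np.argmax(policy_logits))    # policy's selected feature
    delta = np.zeros_like(f)
    delta[idx] = alpha * max(f[idx], 1.0)  # amplify only that feature
    return h + sae_decode(delta), idx

# One token's hidden state and a stand-in policy output.
h = rng.normal(size=d_model)
logits = rng.normal(size=d_sae)
h_steered, chosen = steer_token(h, logits)
intervention_log = [{"token": 0, "feature": chosen}]
```

Because only a single feature's decoder direction is added per token, each log entry names one interpretable feature, which is what preserves the single-feature interpretability the abstract emphasizes.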
Problem

Research questions and friction points this paper is trying to address.

sparse autoencoders
interpretable features
language model steering
feature intervention
mechanistic interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Control Reinforcement Learning
Sparse Autoencoders
Interpretable Steering
Token-Level Intervention
Mechanistic Interpretability