🤖 AI Summary
This work addresses a central challenge in steering large language models (LLMs) via sparse autoencoders (SAEs): identifying features in the model's latent space that are both interpretable and genuinely output-relevant. We formally define and separate two feature types—*input-type* features, which encode information about the input, and *output-type* features, which causally drive the model's output—and empirically observe that the two are nearly mutually exclusive. Building on this distinction, we design an unsupervised dual-scoring framework that prioritizes features with high output scores. Experiments show that filtering out low-output-score features, without any labeled data, improves SAE-based steering performance by 2-3x, matching the efficacy of supervised attribution methods. This advances practical, unsupervised feature attribution and controllable generation in LLMs.
📝 Abstract
Sparse Autoencoders (SAEs) have been proposed as an unsupervised approach to learn a decomposition of a model's latent space. This enables useful applications such as steering - influencing the output of a model towards a desired concept - without requiring labeled data. Current methods identify SAE features to steer by analyzing the input tokens that activate them. However, recent work has highlighted that activations alone do not fully describe the effect of a feature on the model's output. In this work, we draw a distinction between two types of features: input features, which mainly capture patterns in the model's input, and output features, which have a human-understandable effect on the model's output. We propose input and output scores to characterize and locate these types of features, and show that high values for both scores rarely co-occur in the same features. These findings have practical implications: after filtering out features with low output scores, we obtain 2-3x improvements when steering with SAEs, making them competitive with supervised methods.
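The abstract's core recipe—score each SAE feature by how directly it affects the model's output, filter out low-scoring features, then steer with the survivors—can be sketched in a few lines. The snippet below is a toy illustration under stated assumptions, not the paper's actual scoring method: the SAE decoder `W_dec`, the unembedding matrix `W_U`, the concentration-based `output_score` proxy, and the median threshold are all hypothetical stand-ins chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features, vocab = 16, 8, 32

# Hypothetical stand-ins: SAE decoder directions and an unembedding matrix.
W_dec = rng.normal(size=(n_features, d_model))  # one residual-stream direction per feature
W_U = rng.normal(size=(d_model, vocab))         # maps residual stream to logits

def output_score(feature_idx, top_k=5):
    """Toy proxy for an output score: how concentrated the logit change is on
    a few tokens when the feature's decoder direction is added to the residual
    stream. A human-understandable output effect should hit few tokens hard."""
    logit_effect = np.abs(W_dec[feature_idx] @ W_U)          # (vocab,)
    return np.sort(logit_effect)[-top_k:].sum() / logit_effect.sum()

# Score every feature, then keep only those above a (hypothetical) threshold.
scores = np.array([output_score(i) for i in range(n_features)])
keep = np.flatnonzero(scores >= np.median(scores))

def steer(resid, feature_idx, alpha=4.0):
    """Steer by adding a kept feature's decoder direction to a residual vector."""
    return resid + alpha * W_dec[feature_idx]
```

Steering would then only use feature indices in `keep`, discarding input-type features whose activations track the prompt but have no clean effect on the logits.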