🤖 AI Summary
This study systematically evaluates control methods for output steering and concept detection in large language models (LLMs), introducing AxBench, the first large-scale benchmark for this purpose. On Gemma-2, the authors compare prompt engineering, full-parameter finetuning, linear probing, linear artificial tomography, supervised steering vectors, and sparse autoencoders (SAEs), and propose Rank-1 Representation Finetuning (ReFT-r1), a weakly supervised method that balances controllability and interpretability. Empirically, prompt-based steering achieves the strongest output control, difference-in-means (DiffMean) performs best at concept detection, and SAEs fall significantly short of simple baselines. Key contributions include: (1) releasing the open AxBench benchmark; (2) open-sourcing SAE-scale feature dictionaries for ReFT-r1 and DiffMean; and (3) empirically characterizing the performance limits and applicability of representation intervention techniques, establishing an evidence-based foundation and practical toolkit for LLM controllability research.
📝 Abstract
Fine-grained steering of language model outputs is essential for safety and reliability. Prompting and finetuning are widely used to achieve these goals, but interpretability researchers have proposed a variety of representation-based techniques as well, including sparse autoencoders (SAEs), linear artificial tomography, supervised steering vectors, linear probes, and representation finetuning. At present, there is no benchmark for making direct comparisons between these proposals. Therefore, we introduce AxBench, a large-scale benchmark for steering and concept detection, and report experiments on Gemma-2-2B and 9B. For steering, we find that prompting outperforms all existing methods, followed by finetuning. For concept detection, representation-based methods, such as difference-in-means (DiffMean), perform best. On both evaluations, SAEs are not competitive. We introduce a novel weakly-supervised representational method (Rank-1 Representation Finetuning; ReFT-r1), which is competitive on both tasks while providing the interpretability advantages that prompting lacks. Along with AxBench, we train and publicly release SAE-scale feature dictionaries for ReFT-r1 and DiffMean.
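To make the difference-in-means idea concrete, here is a minimal sketch (not the paper's implementation) of how a DiffMean direction is typically computed and used: the direction is the mean hidden activation over concept-positive examples minus the mean over concept-negative examples; detection scores a new activation by projecting onto that direction, and steering adds a scaled copy of the direction to the model's hidden state. All function names and the scaling factor `alpha` are illustrative assumptions.

```python
import numpy as np


def diffmean_direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Difference-in-means direction: mean activation on concept-positive
    examples minus mean activation on concept-negative examples.

    pos_acts, neg_acts: arrays of shape (n_examples, hidden_dim).
    """
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)


def detect(activation: np.ndarray, direction: np.ndarray) -> float:
    """Concept-detection score: projection of a hidden activation
    onto the DiffMean direction (higher = concept more present)."""
    return float(activation @ direction)


def steer(activation: np.ndarray, direction: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Steering intervention: add a scaled copy of the direction to the
    hidden state, nudging generations toward the concept. The strength
    `alpha` is a hyperparameter chosen here for illustration."""
    return activation + alpha * direction
```

In practice the activations would be collected from a chosen layer of the model (e.g. via a forward hook) over labeled concept examples; the same stored direction then serves both tasks, which is why a dictionary of DiffMean directions can be released at SAE scale.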