🤖 AI Summary
Large language models (LLMs) remain difficult to control reliably in open-ended generation. Method: This paper proposes a supervised steering approach grounded in a sparse, interpretable representation space. It integrates sparse autoencoders (SAEs) with task-relevant latent subspace constraints to achieve semantic attribute disentanglement and precise low-dimensional intervention. SAEs extract highly interpretable sparse features; a linear classifier identifies a task-relevant subspace, within which task-specific steering vectors are learned. Contribution/Results: The method significantly improves steering success rates across diverse tasks—including sentiment, factual consistency, and political bias—while inducing minimal degradation in generation quality. Experiments demonstrate that effective steering is achieved using only ~0.1% of the latent dimensions, balancing high success rate, strong interpretability, and low intervention overhead.
📝 Abstract
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but controlling their behavior reliably remains challenging, especially in open-ended generation settings. This paper introduces a novel supervised steering approach that operates in sparse, interpretable representation spaces. We employ sparse autoencoders (SAEs) to obtain sparse latent representations that aim to disentangle semantic attributes from model activations. We then train linear classifiers to identify a small subspace of task-relevant dimensions in the latent representations. Finally, we learn supervised steering vectors constrained to this subspace, optimized to align with target behaviors. Experiments across sentiment, truthfulness, and political-polarity steering tasks with multiple LLMs demonstrate that our supervised steering vectors achieve higher success rates with minimal degradation in generation quality compared to existing methods. Further analysis reveals that a notably small subspace is sufficient for effective steering, enabling more targeted and interpretable interventions.
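The three-step pipeline described above (SAE encoding, classifier-based subspace selection, subspace-constrained steering) can be sketched roughly as follows. This is a minimal illustrative sketch with random stand-in weights, not the paper's implementation; all names, dimensions, and the top-k selection rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: model hidden dim d, (overcomplete) SAE latent dim m.
d, m = 16, 512
W_enc = rng.normal(scale=0.1, size=(d, m))  # stand-in SAE encoder weights
W_dec = rng.normal(scale=0.1, size=(m, d))  # stand-in SAE decoder weights

def sae_encode(h):
    # ReLU yields a sparse, non-negative latent code.
    return np.maximum(h @ W_enc, 0.0)

def sae_decode(z):
    return z @ W_dec

# Step 2 (assumed): the latent-space linear classifier's largest-magnitude
# weights mark the task-relevant subspace; keep only ~0.1% of dimensions.
clf_w = rng.normal(size=m)  # stand-in trained classifier weights
k = max(1, int(0.001 * m))
subspace = np.argsort(-np.abs(clf_w))[:k]

# Step 3 (assumed): a steering vector supported only on that subspace.
v = np.zeros(m)
v[subspace] = clf_w[subspace] / np.linalg.norm(clf_w[subspace])

def steer(h, alpha=4.0):
    """Encode an activation, shift its latent code along v, decode back."""
    z = sae_encode(h)
    return sae_decode(z + alpha * v)

h = rng.normal(size=d)
h_steered = steer(h)
print(np.count_nonzero(v), h_steered.shape)  # intervention touches k latents
```

The key design point the sketch reflects is that the intervention is low-dimensional by construction: only the `k` selected latent coordinates are ever modified, which is what keeps the edit targeted and interpretable.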