🤖 AI Summary
Existing methods for semantic concept manipulation in large language models (LLMs) suffer from poor robustness and inconsistent performance. This work proposes an attention-guided feature learning framework that automatically localizes and hierarchically filters concept-relevant token embeddings, addressing the heterogeneity of conceptual features in activations and identifying the network layers most critical for manipulation. This approach enables, for the first time, adaptive concept manipulation across diverse architectures and model scales. Evaluated on a benchmark of 512 semantic concepts, the method nearly doubles the number of successfully manipulated concepts (validated on models up to 70B parameters), significantly improving accuracy, robustness, and generalization. It also sheds light on the hierarchical organization of semantic concepts within LLMs.
📝 Abstract
Steering, or direct manipulation of internal activations to guide LLM responses toward specific semantic concepts, is emerging as a promising avenue for both understanding how semantic concepts are stored within LLMs and advancing LLM capabilities. Yet, existing steering methods are remarkably brittle, with seemingly non-steerable concepts becoming completely steerable based on subtle algorithmic choices in how concept-related features are extracted. In this work, we introduce an attention-guided steering framework that overcomes three core challenges associated with steering: (1) automatic selection of relevant token embeddings for extracting concept-related features; (2) accounting for heterogeneity of concept-related features across LLM activations; and (3) identification of layers most relevant for steering. Across a steering benchmark of 512 semantic concepts, our framework substantially improved steering over previous state-of-the-art (nearly doubling the number of successfully steered concepts) across model architectures and sizes (up to 70 billion parameter models). Furthermore, we use our framework to shed light on the distribution of concept-specific features across LLM layers. Overall, our framework opens further avenues for developing efficient, highly scalable fine-tuning algorithms for industry-scale LLMs.
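To make the idea of steering concrete, here is a minimal toy sketch of a common baseline approach (difference-in-means activation steering), not the paper's attention-guided method: a concept direction is estimated as the difference between mean activations on concept-bearing vs. neutral prompts, then added to a layer's hidden states at inference time. All arrays here are simulated stand-ins for real LLM activations.

```python
import numpy as np

# Toy illustration of activation steering (difference-in-means baseline,
# NOT the paper's attention-guided framework). Activations are simulated.
rng = np.random.default_rng(0)
d = 8  # hidden dimension (toy size; real LLMs use thousands)

# Simulated layer activations for prompts that do / do not express a concept.
concept_acts = rng.normal(loc=1.0, size=(16, d))
neutral_acts = rng.normal(loc=0.0, size=(16, d))

# A steering vector is often taken as the difference of mean activations,
# normalized so the steering strength is controlled entirely by alpha.
steer = concept_acts.mean(axis=0) - neutral_acts.mean(axis=0)
steer /= np.linalg.norm(steer)

def apply_steering(hidden, vec, alpha=4.0):
    """Add the scaled concept direction to every token's hidden state."""
    return hidden + alpha * vec

hidden = rng.normal(size=(5, d))  # activations for 5 tokens at one layer
steered = apply_steering(hidden, steer)

# The steered activations project more strongly onto the concept direction;
# since `steer` is unit-norm, the mean projection increases by exactly alpha.
print(float((steered @ steer).mean() - (hidden @ steer).mean()))
```

In a real model, `apply_steering` would be registered as a forward hook on a chosen transformer layer; the challenges the paper targets (which tokens to average, feature heterogeneity, and which layer to hook) are exactly what this naive sketch glosses over.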