🤖 AI Summary
Existing Transformer interpretability research predominantly focuses on MLP neurons and simple factual concepts, neglecting attention mechanisms and lacking a unified framework for analyzing complex, abstract concepts. Method: We propose Scalable Attention Module Discovery (SAMD) and Scalar Attention Module Intervention (SAMI), a concept-agnostic approach that maps arbitrary complex concepts (e.g., ethical judgments or reasoning steps) to specific attention heads and modulates their influence with a single scalar parameter. The method represents each concept as a vector and ranks attention heads by cosine similarity, enabling consistent cross-modal and cross-task analysis. Contribution/Results: Module localization remains stable before and after LLM post-training; SAMI raises the jailbreaking success rate on HarmBench by 72.7% when diminishing "safety" and yields a 1.6% absolute accuracy gain on GSM8K when amplifying "reasoning"; the method also generalizes to Vision Transformers, where it can suppress ImageNet classification accuracy, validating both its broad applicability and precise controllability.
📝 Abstract
Transformers have achieved state-of-the-art performance across language and vision tasks. This success drives the imperative to interpret their internal mechanisms with the dual goals of enhancing performance and improving behavioral control. Attribution methods help advance interpretability by assigning model outputs associated with a target concept to specific model components. Current attribution research primarily studies multi-layer perceptron neurons and addresses relatively simple concepts such as factual associations (e.g., Paris is located in France). This focus tends to overlook the impact of the attention mechanism and lacks a unified approach for analyzing more complex concepts. To fill these gaps, we introduce Scalable Attention Module Discovery (SAMD), a concept-agnostic method for mapping arbitrary, complex concepts to specific attention heads of general transformer models. We accomplish this by representing each concept as a vector, calculating its cosine similarity with each attention head, and selecting the TopK-scoring heads to construct the concept-associated attention module. We then propose Scalar Attention Module Intervention (SAMI), a simple strategy to diminish or amplify the effects of a concept by adjusting the attention module using only a single scalar parameter. Empirically, we demonstrate SAMD on concepts of varying complexity, and visualize the locations of their corresponding modules. Our results demonstrate that module locations remain stable before and after LLM post-training, and confirm prior work on the mechanics of LLM multilingualism. Through SAMI, we facilitate jailbreaking on HarmBench (+72.7%) by diminishing "safety" and improve performance on the GSM8K benchmark (+1.6%) by amplifying "reasoning". Lastly, we highlight the domain-agnostic nature of our approach by suppressing the image classification accuracy of vision transformers on ImageNet.
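The pipeline described in the abstract (concept vectorization, cosine-similarity ranking of attention heads, TopK module construction, and scalar intervention) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the representation of heads as mean output vectors, and the intervention signature are all assumptions for the example.

```python
import numpy as np

def discover_module(concept_vec, head_vecs, k=5):
    """SAMD-style module discovery (illustrative sketch).

    concept_vec: (d,) vector representing the concept.
    head_vecs:   dict mapping a head identifier, e.g. (layer, head),
                 to a (d,) vector summarizing that head's output.
    Returns the k head identifiers whose vectors have the highest
    cosine similarity with the concept vector.
    """
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = {hid: cosine(concept_vec, v) for hid, v in head_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def intervene(head_out, head_id, module, s=1.0):
    """SAMI-style intervention (illustrative sketch): scale the output
    of heads in the concept module by a single scalar s.
    s < 1 diminishes the concept's effect; s > 1 amplifies it."""
    return s * head_out if head_id in module else head_out
```

In practice the per-head vectors would come from hooking a transformer's attention outputs over a concept-eliciting dataset; the sketch above only shows the ranking and scaling logic.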