🤖 AI Summary
Precise identification of behaviorally relevant attention heads during LLM inference remains challenging; existing methods often rely on superficial cues or heuristic strategies, resulting in poor controllability.
Method: We propose a parameter-free causal attribution framework that introduces a vector-quantized autoencoder (VQ-AE) to achieve interpretable disentanglement of attention head activation spaces. Behavioral relevance is defined at the head level via a binary discrimination criterion—alignment versus violation of target behavior—enabling importance-weighted attribution.
Contribution/Results: Our method significantly improves factual consistency control across 7 LLMs and 5 behavior-guided tasks. The identified attention heads exhibit strong cross-domain zero-shot generalization. By eliminating the need for fine-tuning and providing human-interpretable, robust behavioral intervention, our approach establishes a novel paradigm for controllable LLM reasoning.
📝 Abstract
Inference-time steering aims to alter the response characteristics of large language models (LLMs) without modifying their underlying parameters. A critical step in this process is the identification of internal modules within LLMs that are associated with the target behavior. However, current approaches to module selection often depend on superficial cues or ad-hoc heuristics, which can result in suboptimal or unintended outcomes. In this work, we propose a principled causal-attribution framework for identifying behavior-relevant attention heads in transformers. For each head, we train a vector-quantized autoencoder (VQ-AE) on its attention activations, partitioning the latent space into behavior-relevant and behavior-irrelevant subspaces, each quantized with a shared learnable codebook. We assess the behavioral relevance of each head by quantifying the separability of VQ-AE encodings for behavior-aligned versus behavior-violating responses using a binary classification metric. This yields a behavioral relevance score that reflects each head's discriminative capacity with respect to the target behavior, guiding both selection and importance weighting. Experiments on seven LLMs from two model families and five behavioral steering datasets demonstrate that our method enables more accurate inference-time interventions, achieving superior performance on the truthfulness-steering task. Furthermore, the heads selected by our approach exhibit strong zero-shot generalization in cross-domain truthfulness-steering scenarios.
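The scoring pipeline described above can be sketched in a toy form: quantize each head's activations against a codebook (the VQ step of a VQ-AE), then measure how separable the resulting encodings are for behavior-aligned versus behavior-violating responses with a rank-based AUROC. This is a minimal illustration under assumed interfaces, not the paper's implementation; the array shapes, the random toy data, and the choice of nearest-code distance as the per-response score are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(acts, codebook):
    # Vector-quantization step: assign each activation vector to its
    # nearest codebook entry (squared Euclidean distance).
    d = ((acts[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def min_code_dist(acts, codebook):
    # Per-response score: distance to the nearest code. (Toy proxy for
    # "how well the VQ-AE encoding fits this response" -- an assumption.)
    return ((acts[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).min(axis=1)

def auroc(pos_scores, neg_scores):
    # Rank-based AUROC: probability a positive score outranks a negative one,
    # counting ties as half. Serves as the binary separability metric.
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

# Toy per-head activations: behavior-aligned vs. behavior-violating responses.
d_model, n_codes = 8, 4
codebook = rng.normal(size=(n_codes, d_model))     # shared learnable codebook (here: random)
aligned = rng.normal(loc=0.0, size=(100, d_model))
violating = rng.normal(loc=1.5, size=(100, d_model))

codes = quantize(aligned, codebook)
relevance = auroc(min_code_dist(violating, codebook),
                  min_code_dist(aligned, codebook))
# `relevance` in [0, 1]: higher means the head's encodings separate the two
# behavior classes better, so the head gets more weight in the intervention.
```

In the full method this score would be computed per attention head with a trained VQ-AE rather than a random codebook, and the top-scoring heads would be selected and importance-weighted for steering.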