🤖 AI Summary
Existing inference-time intervention methods suffer from three key limitations: reliance on fixed, global steering vectors; neglect of token-level causal effects; and underutilization of gradients from the model's logits, especially when multimodal inputs contribute unevenly. This paper proposes GrAInS, a dynamic steering method that uses contrastive gradient attribution, computed via Integrated Gradients (IG), to yield interpretable, token-level attribution scores. Leveraging these scores, GrAInS constructs semantically guided positive and negative direction vectors, dynamically modulating hidden states and normalizing representation scales at each Transformer layer. Crucially, it requires no parameter updates and unifies intervention across LLMs and VLMs. On Llama-3.1-8B, GrAInS improves TruthfulQA accuracy by 13.22%; on LLaVA-1.6-7B, it reduces the hallucination rate on MMHal-Bench from 0.624 to 0.514; and on SPA-VL, it boosts the alignment win rate by 8.11%, all while preserving generation fluency and general capabilities.
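The attribution step above can be sketched on a toy model. The snippet below is a minimal, self-contained illustration (not the paper's implementation): it approximates Integrated Gradients with a midpoint Riemann sum along the straight path from a zero baseline, using a contrastive scalar score (preferred minus dispreferred output) whose gradient is analytic. All names, shapes, and the score function are hypothetical stand-ins for the model's logits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 6 "tokens", each with an 8-dim embedding (illustrative only).
E = rng.normal(size=(6, 8))
w_pos = rng.normal(size=8)   # stand-in direction for the preferred output's logit
w_neg = rng.normal(size=8)   # stand-in direction for the dispreferred output's logit

def contrastive_score(X):
    """Scalar contrast: score of the preferred minus the dispreferred output."""
    return np.tanh(X @ w_pos).sum() - np.tanh(X @ w_neg).sum()

def grad_contrastive_score(X):
    """Analytic gradient of contrastive_score w.r.t. the token embeddings X."""
    return ((1 - np.tanh(X @ w_pos) ** 2)[:, None] * w_pos
            - (1 - np.tanh(X @ w_neg) ** 2)[:, None] * w_neg)

def integrated_gradients(X, baseline=None, steps=200):
    """Midpoint Riemann-sum approximation of IG along baseline -> X."""
    if baseline is None:
        baseline = np.zeros_like(X)
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.zeros_like(X)
    for a in alphas:
        avg_grad += grad_contrastive_score(baseline + a * (X - baseline))
    avg_grad /= steps
    return (X - baseline) * avg_grad     # per-dimension attributions

attr = integrated_gradients(E).sum(axis=1)   # one signed score per token
k = 2
top_pos = np.argsort(attr)[-k:]              # most positively attributed tokens
top_neg = np.argsort(attr)[:k]               # most negatively attributed tokens
```

A useful sanity check is IG's completeness property: the attributions sum (approximately) to the score difference between the input and the baseline.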
📝 Abstract
Inference-time steering methods offer a lightweight alternative to fine-tuning large language models (LLMs) and vision-language models (VLMs) by modifying internal activations at test time without updating model weights. However, most existing approaches rely on fixed, global intervention vectors, overlook the causal influence of individual input tokens, and fail to leverage informative gradients from the model's logits, particularly in multimodal settings where visual and textual inputs contribute unevenly. To address these limitations, we introduce GrAInS, an inference-time steering approach that operates across both language-only and vision-language models and tasks. GrAInS uses contrastive, gradient-based attribution via Integrated Gradients to identify the top-k most influential tokens, both positively and negatively attributed based on their contribution to preferred versus dispreferred outputs. These tokens are then used to construct directional steering vectors that capture semantic shifts from undesirable to desirable behavior. During inference, GrAInS adjusts hidden activations at transformer layers guided by token-level attribution signals, and normalizes activations to preserve representational scale. This enables fine-grained, interpretable, and modular control over model behavior, without retraining or auxiliary supervision. Empirically, GrAInS consistently outperforms both fine-tuning and existing steering baselines: it achieves a 13.22% accuracy gain on TruthfulQA using Llama-3.1-8B, reduces hallucination rates on MMHal-Bench from 0.624 to 0.514 with LLaVA-1.6-7B, and improves alignment win rates on SPA-VL by 8.11%, all while preserving the model's fluency and general capabilities.
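The steering step described above, building a direction from positively versus negatively attributed tokens and rescaling the adjusted activation, can be sketched as follows. This is a hedged illustration under assumptions, not the paper's exact formulation: the hidden states, the scaling factor `alpha`, and the use of mean pooling over the selected tokens are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # hidden dimension (illustrative)

# Hypothetical layer hidden states for tokens flagged by attribution.
H_pos = rng.normal(size=(2, d))   # positively attributed tokens
H_neg = rng.normal(size=(2, d))   # negatively attributed tokens

# Directional steering vector: shift from undesirable toward desirable behavior.
v = H_pos.mean(axis=0) - H_neg.mean(axis=0)
v /= np.linalg.norm(v)

def steer(h, v, alpha=4.0):
    """Add the steering direction, then rescale the result back to the
    original norm so the layer's representational scale is preserved."""
    h_new = h + alpha * v
    return h_new * (np.linalg.norm(h) / np.linalg.norm(h_new))

h = rng.normal(size=d)        # a hidden activation at some transformer layer
h_steered = steer(h, v)
```

The renormalization step is what keeps the intervention from inflating activation magnitudes: the steered vector changes direction but keeps the original norm.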