Improving Instruction-Following in Language Models through Activation Steering

📅 2024-10-15
🏛️ arXiv.org
📈 Citations: 11
Influential: 1
🤖 AI Summary
This work addresses the limited capability of large language models (LLMs) to adhere to fine-grained instruction constraints—such as formatting, length, and keyword requirements—and their poor generalization across zero-shot or cross-model settings. To this end, we propose activation steering: a lightweight, inference-time intervention that computes layer-wise neural activation differences between instruction-present and instruction-absent conditions, yielding interpretable, transferable, and composable instruction vectors. Crucially, no model fine-tuning is required. Our key contribution is the first formulation of instructions as cross-model-transferable activation-difference vectors, enabling vector composition (e.g., stacking multiple constraints) and foundation-model enhancement. Extensive evaluation across four mainstream LLMs demonstrates substantial improvements in instruction-following accuracy. The method supports constraint-aware generation without explicit instructions, concurrent multi-constraint control, and knowledge transfer from instruction-tuned models to base models.

📝 Abstract
The ability to follow instructions is crucial for numerous real-world applications of language models. In pursuit of deeper insights and more powerful capabilities, we derive instruction-specific vector representations from language models and use them to steer models accordingly. These vectors are computed as the difference in activations between inputs with and without instructions, enabling a modular approach to activation steering. We demonstrate how this method can enhance model adherence to constraints such as output format, length, and word inclusion, providing inference-time control over instruction following. Our experiments across four models demonstrate how we can use the activation vectors to guide models to follow constraints even without explicit instructions and to enhance performance when instructions are present. Additionally, we explore the compositionality of activation steering, successfully applying multiple instructions simultaneously. Finally, we demonstrate that steering vectors computed on instruction-tuned models can transfer to improve base models. Our findings demonstrate that activation steering offers a practical and scalable approach for fine-grained control in language generation.
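The core computation described in the abstract—a steering vector as the mean activation difference between instruction-present and instruction-absent inputs, added back to hidden states at inference time—can be sketched as follows. This is a minimal numpy illustration, assuming layer activations at a chosen token position have already been extracted; the function names and the scaling parameter `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

def steering_vector(acts_with, acts_without):
    """Instruction vector: mean activation difference between paired inputs.

    acts_with, acts_without: arrays of shape (n_pairs, hidden_dim), the
    chosen-layer activations for inputs with and without the instruction.
    """
    return (acts_with - acts_without).mean(axis=0)

def steer(hidden, vec, alpha=1.0):
    """Apply the steering vector to a hidden state at inference time
    by simple scaled addition (no fine-tuning involved)."""
    return hidden + alpha * vec

# Toy example with two paired prompts and a 2-d hidden state.
acts_with = np.array([[1.0, 2.0], [3.0, 4.0]])
acts_without = np.array([[0.0, 1.0], [1.0, 2.0]])
vec = steering_vector(acts_with, acts_without)   # -> [1.5, 1.5]
steered = steer(np.zeros(2), vec, alpha=2.0)     # -> [3.0, 3.0]
```

In practice the addition would happen inside the model's forward pass (e.g., via a hook on the chosen layer's residual stream), but the arithmetic is exactly this.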
Problem

Research questions and friction points this paper is trying to address.

Enhancing instruction-following in language models via activation steering
Controlling output format, length, and word inclusion constraints
Transferring steering vectors from tuned to base models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derive instruction-specific vectors from model activations
Steer models modularly via activation difference vectors
Transfer steering vectors to enhance base models
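The compositionality claim above—applying multiple instructions simultaneously—amounts to combining their vectors before steering. A minimal sketch under that assumption; the weighted-sum form and per-constraint weights are illustrative choices, not details from the paper:

```python
import numpy as np

def compose(vectors, weights=None):
    """Combine several instruction vectors (e.g., format + length)
    into one steering vector by weighted summation."""
    stacked = np.stack(vectors)
    if weights is None:
        weights = np.ones(len(stacked))
    return (np.asarray(weights)[:, None] * stacked).sum(axis=0)

# Toy example: a "format" vector and a "length" vector in 2-d.
fmt = np.array([1.0, 0.0])
length = np.array([0.0, 2.0])
combined = compose([fmt, length])                 # -> [1.0, 2.0]
weighted = compose([fmt, length], [0.5, 1.0])     # -> [0.5, 2.0]
```

Cross-model transfer then reduces to computing the vector on an instruction-tuned model and applying the same addition inside the base model's forward pass, which presumes matching hidden dimensions between the two models.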