Do LLMs "know" internally when they follow instructions?

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) encode predictive signals of instruction-following success within their internal representations. The authors identify and empirically validate a generalizable "instruction-following direction": a linear direction in the input embedding space that reliably separates compliant from non-compliant responses and supports both diagnostic probing and targeted representation intervention. Linear probes locate this direction, and controlled interventions along it modulate adherence. Results show the direction tracks prompt phrasing rather than task difficulty, and that intervening along it significantly improves instruction-following rates on unseen tasks without degrading response quality. However, generalization remains limited across novel instruction types, underscoring the continued role of prompt engineering. The core contribution is the first empirical identification, quantification, and intervention-based validation of a transferable, manipulable representation of instruction following in LLMs.
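The probing idea in the summary can be sketched with synthetic data. This is a minimal illustration, not the paper's exact method: real experiments would use LLM hidden states as `X` and compliance labels as `y`, and the difference-of-means probe used here is just one simple way to extract a linear direction.

```python
# Sketch: recover a planted "instruction-following direction" with a linear probe.
# All data is synthetic; the planted direction stands in for the signal the
# paper finds in real LLM representations.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # hidden-state width (illustrative)
planted = rng.normal(size=d)
planted /= np.linalg.norm(planted)       # ground-truth direction we plant

n = 500
X = rng.normal(size=(n, d))              # stand-in "representations"
y = rng.integers(0, 2, size=n)           # 1 = response followed the instruction
X[y == 1] += 3.0 * planted               # shift compliant states along the direction

# Difference-of-means probe: a simple linear direction separating the classes.
mu1, mu0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
direction = mu1 - mu0
direction /= np.linalg.norm(direction)

# Classify by projecting onto the direction and thresholding at the midpoint.
proj = X @ direction
thresh = (mu1 + mu0) @ direction / 2
acc = ((proj > thresh) == (y == 1)).mean()
print(f"probe accuracy: {acc:.2f}")
print(f"cosine with planted direction: {direction @ planted:.2f}")
```

With a clear planted signal, the probe's direction aligns closely with the ground truth and the projection alone classifies compliance well above chance.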

📝 Abstract
Instruction-following is crucial for building AI agents with large language models (LLMs), as these models must adhere strictly to user-provided constraints and guidelines. However, LLMs often fail to follow even simple and clear instructions. To improve instruction-following behavior and prevent undesirable outputs, a deeper understanding of how LLMs' internal states relate to these outcomes is required. In this work, we investigate whether LLMs encode information in their representations that correlates with instruction-following success, a property we term knowing internally. Our analysis identifies a direction in the input embedding space, termed the instruction-following dimension, that predicts whether a response will comply with a given instruction. We find that this dimension generalizes well across unseen tasks but not across unseen instruction types. We demonstrate that modifying representations along this dimension improves instruction-following success rates compared to random changes, without compromising response quality. Further investigation reveals that this dimension is more closely related to the phrasing of prompts than to the inherent difficulty of the task or instructions. This work provides insight into the internal workings of LLMs' instruction-following, paving the way for reliable LLM agents.
Problem

Research questions and friction points this paper is trying to address.

Investigates if LLMs internally encode instruction-following success
Identifies a predictive dimension for instruction compliance in LLMs
Explores improving instruction-following by modifying model representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies instruction-following dimension in embeddings
Modifies representations to improve compliance rates
Links the dimension to prompt phrasing rather than task difficulty
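The intervention in the second bullet can be sketched as a simple vector shift. This is an illustrative toy, not the paper's implementation: the direction here is random, the probe is a bare dot product, and `alpha` is a hypothetical strength hyperparameter, whereas the paper learns the direction from real activations.

```python
# Sketch of representation intervention: shift a hidden state along a fixed
# "instruction-following direction" and watch the probe's score move with it.
import numpy as np

rng = np.random.default_rng(1)
d = 64
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)   # unit steering direction

def probe_score(h):
    # A linear probe's logit: projection onto the direction (bias omitted).
    return float(h @ direction)

h = rng.normal(size=d)                   # a "non-compliant" hidden state
h -= probe_score(h) * direction          # zero its component along the direction

alpha = 2.5                              # intervention strength (hyperparameter)
h_steered = h + alpha * direction        # move along the direction

print(probe_score(h))                    # ≈ 0.0 before the shift
print(probe_score(h_steered))            # ≈ alpha after the shift
```

Because the direction is unit-norm, adding `alpha * direction` raises the probe's score by exactly `alpha`, which is the mechanism behind steering compliance without touching orthogonal components of the representation.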