🤖 AI Summary
This paper addresses the degradation of controllability and reliability in large language models (LLMs) caused by conflicting instructions from multiple sources: developers, users, and external tools. To resolve this, we propose Instruction Hierarchy (IH) modeling, a paradigm that reframes instruction priority resolution as a verifiable logical reasoning task: the model explicitly infers the logical relationship between user-provided instructions and higher-priority system instructions before generating a response. Leveraging our curated VerIH dataset, we apply lightweight reinforcement learning fine-tuning to ensure robust execution of high-priority instructions in both alignment and conflict scenarios. This work is the first to cast instruction priority learning as structured reasoning training. Empirical results demonstrate significant improvements in instruction following and hierarchical reasoning, along with strong generalization in security evaluations, including jailbreak resistance and prompt injection robustness.
📝 Abstract
As large language model (LLM) based systems take on high-stakes roles in real-world decision-making, they must reconcile competing instructions from multiple sources (e.g., model developers, users, and tools) within a single prompt context. Thus, enforcing an instruction hierarchy (IH) in LLMs, where higher-level directives override lower-priority requests, is critical for the reliability and controllability of LLMs. In this work, we reframe instruction hierarchy resolution as a reasoning task. Specifically, the model must first "think" about the relationship between a given user prompt and higher-priority (system) instructions before generating a response. To enable this capability via training, we construct VerIH, an instruction hierarchy dataset of constraint-following tasks with verifiable answers. This dataset comprises both aligned and conflicting system-user instructions. We show that lightweight reinforcement learning with VerIH effectively transfers general reasoning capabilities of models to instruction prioritization. Our finetuned models achieve consistent improvements on instruction following and instruction hierarchy benchmarks. This reasoning ability also generalizes to safety-critical settings beyond the training distribution. By treating safety issues as resolving conflicts between adversarial user inputs and predefined higher-priority policies, our trained model enhances robustness against jailbreak and prompt injection attacks. These results demonstrate that reasoning over instruction hierarchies provides a practical path to reliable LLMs, where updates to system prompts yield controllable and robust changes in model behavior.
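The abstract's notion of "constraint-following tasks with verifiable answers" can be illustrated with a minimal sketch: a rule-based verifier checks whether a model's response obeys the higher-priority system constraint, yielding a binary reward for reinforcement learning. All names and checks below are illustrative assumptions for exposition; they are not the paper's actual VerIH implementation.

```python
# Hypothetical sketch of a verifiable instruction-hierarchy reward.
# The system constraint is a programmatic predicate on the response,
# so the reward can be computed without a judge model (an assumption
# made here for illustration; VerIH's real format may differ).
from typing import Callable

def hierarchy_reward(system_rule: Callable[[str], bool],
                     response: str) -> float:
    """Return 1.0 if the response satisfies the higher-priority
    system rule, else 0.0."""
    return 1.0 if system_rule(response) else 0.0

# Example system instruction: "never reveal the secret token".
# Aligned user requests leave the rule untouched; conflicting ones
# attempt to violate it, and only compliant responses are rewarded.
def no_leak(response: str) -> bool:
    return "SECRET-TOKEN" not in response

aligned  = hierarchy_reward(no_leak, "2 + 2 = 4")            # benign request
refused  = hierarchy_reward(no_leak, "I can't share that.")  # conflict, upheld
leaked   = hierarchy_reward(no_leak, "It is SECRET-TOKEN.")  # conflict, violated
```

Here `aligned` and `refused` both earn reward 1.0 while `leaked` earns 0.0, so optimizing this signal pushes the model to uphold the system constraint whether or not the user request conflicts with it.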