HIPO: Instruction Hierarchy via Constrained Reinforcement Learning

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing system-prompt compliance with user utility in large language models that operate under multiple, potentially conflicting instructions. The authors formulate hierarchical instruction following as a constrained Markov decision process, for the first time elevating system prompts from contextual inputs to hard algorithmic constraints. Using primal-dual safe reinforcement learning, the approach maximizes user utility while strictly adhering to system-level constraints, intrinsically modeling the asymmetric priorities among instructions. Experiments across mainstream models, including Qwen, Phi, and Llama, demonstrate significant improvements in both system compliance and user satisfaction. Attention analysis further confirms that the model autonomously increases its focus on long-range system instructions, validating the proposed constraint-integration mechanism.
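The primal-dual scheme described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function names (`lagrangian_objective`, `dual_update`), the scalar cost/reward signals, and the violation budget `threshold` are all assumptions made for the sketch.

```python
# Toy sketch of a primal-dual (Lagrangian) constrained-RL update.
# All names and values are illustrative, not from the paper.

def lagrangian_objective(reward, cost, lam, threshold):
    # Penalized objective: user utility minus the dual variable lam
    # times the system-compliance constraint violation (cost above
    # the allowed budget).
    return reward - lam * (cost - threshold)

def dual_update(lam, cost, threshold, lr=0.1):
    # Dual ascent: raise lam when the constraint is violated
    # (cost > threshold), lower it otherwise; lam stays nonnegative.
    return max(0.0, lam + lr * (cost - threshold))

# As training drives compliance cost toward the budget, lam stops
# growing; the primal step (not shown) would maximize the penalized
# objective over policy parameters.
lam = 0.0
threshold = 0.1
for cost in [0.5, 0.4, 0.2, 0.1, 0.05]:
    lam = dual_update(lam, cost, threshold)
```

The key property this illustrates is that the penalty weight is learned rather than hand-tuned: the dual variable grows only while the compliance constraint is violated.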

📝 Abstract
Hierarchical Instruction Following (HIF) refers to the problem of prompting large language models with a priority-ordered stack of instructions. Standard methods such as RLHF and DPO typically fall short on this problem because they optimize a single objective and do not explicitly enforce system prompt compliance. Supervised fine-tuning, meanwhile, relies on mimicking filtered, compliant data and thus fails to establish the priority asymmetry at the algorithmic level. In this paper, we introduce HIPO, a novel alignment framework that formulates HIF as a Constrained Markov Decision Process. HIPO elevates system prompts from mere input context to strict algorithmic boundaries. Using a primal-dual safe reinforcement learning approach, the algorithm dynamically enforces system prompt compliance as an explicit constraint, maximizing user utility strictly within this feasible region. Extensive evaluations across diverse model architectures (e.g., Qwen, Phi, Llama) demonstrate that HIPO significantly improves both system compliance and user utility. Furthermore, mechanistic analysis reveals that this constrained optimization autonomously drives the model to shift its attention toward long-range system tokens, providing a principled foundation for reliable LLM deployment in complex workflows.
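The constrained-MDP formulation in the abstract can be read as the following standard problem; the symbols ($R_{\text{user}}$ for user utility, $C_{\text{sys}}$ for a system-compliance cost, and budget $d$) are illustrative notation, not taken from the paper:

```latex
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\left[ R_{\text{user}}(\tau) \right]
\quad \text{s.t.} \quad
\mathbb{E}_{\tau \sim \pi}\!\left[ C_{\text{sys}}(\tau) \right] \le d
```

A primal-dual method would then alternate policy updates with dual-variable updates on the associated Lagrangian:

```latex
\min_{\lambda \ge 0} \; \max_{\pi} \;
\mathbb{E}\!\left[ R_{\text{user}} \right]
- \lambda \left( \mathbb{E}\!\left[ C_{\text{sys}} \right] - d \right)
```

so that system compliance acts as a hard feasibility boundary rather than a soft reward term.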
Problem

Research questions and friction points this paper is trying to address.

Hierarchical Instruction Following
System Prompt Compliance
Constrained Reinforcement Learning
Large Language Models
Instruction Priority
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Instruction Following
Constrained Reinforcement Learning
System Prompt Compliance
Primal-Dual Optimization
LLM Alignment