🤖 AI Summary
Real-world instructions often contain logical structures such as sequential dependencies and conditional branches, yet existing approaches neglect these dependencies, resulting in noisy training signals and suboptimal performance. To address this, the work introduces LSRInstruct, a dataset explicitly encoding parallel, sequential, and conditional constraints, and proposes the first structure-aware reward mechanism in reinforcement learning for instruction following. The method employs average aggregation for parallel structures, propagates failure penalties across sequential steps, and applies selective rewards to conditional branches. By explicitly modeling instruction logic, the approach significantly improves model performance on in-domain and cross-domain instruction-following tasks as well as general reasoning benchmarks. Attention analysis further reveals that training with logical structures updates attention parameters and strengthens token-level focus on constraints and logical operators.
📝 Abstract
Instruction following is critical for large language models, but real-world instructions often contain logical structures such as sequential dependencies and conditional branching. Existing methods typically construct datasets with only parallel constraints and optimize average rewards, ignoring logical dependencies and yielding noisy signals. We propose LSRIF, a logic-structured training framework that explicitly models instruction logic. We first construct LSRInstruct, a dataset with parallel, sequential, and conditional constraint structures, and then design a structure-aware reward method comprising average aggregation for parallel structures, failure-penalty propagation for sequential structures, and selective rewards for conditional branches. Experiments show LSRIF brings significant improvements in instruction following (in-domain and out-of-domain) and general reasoning. Analysis reveals that learning with explicit logic structures concentrates parameter updates in attention layers and sharpens token-level attention to constraints and logical operators.
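The three aggregation rules in the abstract can be sketched as recursion over a constraint tree. The tree encoding, field names, and exact penalty rule below are illustrative assumptions, not the paper's implementation: a leaf holds a per-constraint reward, a parallel node averages its children, a sequential node stops crediting steps after the first failure, and a conditional node scores only the branch that was actually triggered.

```python
def aggregate_reward(node):
    """Aggregate per-constraint rewards over a logic-structured instruction tree.

    Hypothetical encoding (an assumption, not the paper's API):
      leaf:        {"reward": float}   # 1.0 = constraint satisfied, 0.0 = violated
      internal:    {"type": "parallel" | "sequential" | "conditional",
                    "children": [subtrees...]}
      conditional nodes also carry "active", the index of the triggered branch.
    """
    if "reward" in node:
        return node["reward"]

    kind = node["type"]
    children = node["children"]

    if kind == "parallel":
        # Parallel constraints: simple average over sub-rewards.
        return sum(aggregate_reward(c) for c in children) / len(children)

    if kind == "sequential":
        # Sequential constraints: a failed step propagates its penalty,
        # so later steps earn no credit.
        total = 0.0
        for child in children:
            r = aggregate_reward(child)
            total += r
            if r == 0.0:  # failure blocks credit for all remaining steps
                break
        return total / len(children)

    if kind == "conditional":
        # Conditional branches: selectively reward only the branch taken.
        return aggregate_reward(children[node["active"]])

    raise ValueError(f"unknown node type: {kind}")
```

For example, a sequential instruction with rewards [1, 0, 1] scores 1/3 rather than 2/3, because the third step's success is not credited after the second step fails; a plain average would mask that dependency.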