LSRIF: Logic-Structured Reinforcement Learning for Instruction Following

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world instructions often contain logical structures such as sequential dependencies and conditional branches, yet existing approaches neglect these dependencies, resulting in noisy training signals and suboptimal performance. To address this, the work introduces LSRInstruct, a dataset explicitly encoding parallel, sequential, and conditional constraints, and proposes the first structure-aware reward mechanism in reinforcement learning for instruction following. The method employs average aggregation for parallel structures, propagates failure penalties across sequential steps, and applies selective rewards to conditional branches. By explicitly modeling instruction logic, the approach significantly improves performance on both in-domain and cross-domain instruction-following tasks as well as on general reasoning benchmarks. Attention analysis further reveals that training with logical structures refines attention parameters, strengthening token-level focus on constraints and logical operators.

📝 Abstract
Instruction following is critical for large language models, but real-world instructions often contain logical structures such as sequential dependencies and conditional branching. Existing methods typically construct datasets with parallel constraints and optimize average rewards, ignoring logical dependencies and yielding noisy signals. We propose a logic-structured training framework, LSRIF, that explicitly models instruction logic. We first construct a dataset, LSRInstruct, with parallel, sequential, and conditional constraint structures, and then design a structure-aware reward method comprising average aggregation for parallel structures, failure-penalty propagation for sequential structures, and selective rewards for conditional branches. Experiments show LSRIF brings significant improvements in instruction following (in-domain and out-of-domain) and general reasoning. Analysis reveals that learning with explicit logic structures induces parameter updates in attention layers and sharpens token-level attention to constraints and logical operators.
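The three aggregation rules in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract's description, not the paper's actual implementation; the function name, the convention that scores lie in [0, 1], and the assumption that the taken conditional branch comes first are all hypothetical.

```python
def aggregate_reward(structure, scores):
    """Combine per-constraint verifier scores (each in [0, 1]) by logical structure.

    structure: "parallel", "sequential", or "conditional" (names assumed)
    scores: per-constraint scores, in instruction order
    """
    if structure == "parallel":
        # Parallel constraints: plain average, as in prior reward designs.
        return sum(scores) / len(scores)
    if structure == "sequential":
        # Sequential constraints: once a step fails, propagate the failure
        # penalty so that all later steps earn zero credit.
        credited = []
        for s in scores:
            credited.append(s)
            if s == 0:
                credited.extend([0.0] * (len(scores) - len(credited)))
                break
        return sum(credited) / len(scores)
    if structure == "conditional":
        # Conditional branch: score only the branch whose condition held;
        # here we assume the taken branch's score is passed first.
        return scores[0]
    raise ValueError(f"unknown structure: {structure}")
```

For example, with scores `[1, 0, 1]`, parallel aggregation yields 2/3, while sequential aggregation yields only 1/3, since the third step receives no credit after the second fails.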
Problem

Research questions and friction points this paper is trying to address.

instruction-following
logical structures
sequential dependencies
conditional branching
reward signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

logic-structured reinforcement learning
instruction following
structured reward design
constraint-aware attention
LSRIF
Qingyu Ren
Shanghai Key Laboratory of Data Science, College of Computer Science and Artificial Intelligence, Fudan University
Qianyu He
Fudan University
Large Language Model, Reasoning, Instruction Following, Creative Generation
Jingwen Chang
Shanghai Key Laboratory of Data Science, College of Computer Science and Artificial Intelligence, Fudan University
Jie Zeng
Fudan University
Large Language Model, Natural Language Processing
Jiaqing Liang
Fudan University
knowledge graph, deep learning
Yanghua Xiao
Shanghai Key Laboratory of Data Science, College of Computer Science and Artificial Intelligence, Fudan University
Han Xia
Ant Group
Zeye Sun
Ant Group
Fei Yu
Ant Group