Light-IF: Endowing LLMs with Generalizable Reasoning via Preview and Self-Checking for Complex Instruction Following

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often exhibit unstable performance in complex instruction following due to *reasoning laziness*—a tendency to prematurely terminate or oversimplify reasoning chains. To address this, we propose Light-IF, the first framework to systematically model *dynamic preview* and *self-checking* mechanisms. Light-IF jointly optimizes chain-of-thought rigor and generalization via entropy-preserving supervised fine-tuning (Entropy-SFT) and token-level entropy-adaptive reinforcement learning (TEA-RL). High-quality training data is constructed through complex constrained instruction generation followed by rigorous filtering, further enhanced by rejection sampling. Experiments across multiple model scales demonstrate substantial improvements in instruction-following fidelity. Notably, the 32B Light-IF variant outperforms larger models—including DeepSeek-R1 and Doubao-1.6—on benchmark tasks, empirically validating that dynamic self-checking effectively mitigates reasoning laziness.

📝 Abstract
While advancements in the reasoning abilities of LLMs have significantly enhanced their performance in solving mathematical problems, coding tasks, and general puzzles, their effectiveness in accurately adhering to instructions remains inconsistent, particularly with more complex directives. Our investigation identifies lazy reasoning during the thinking stage as the primary factor contributing to poor instruction adherence. To mitigate this issue, we propose a comprehensive framework designed to enable rigorous reasoning processes involving preview and self-checking, essential for satisfying strict instruction constraints. Specifically, we first generate instructions with complex constraints and apply a filtering process to obtain valid prompts, resulting in three distinct prompt datasets categorized as hard, easy, and pass. Then, we employ rejection sampling on the pass prompts to curate a small yet high-quality dataset, enabling a cold-start initialization of the model and facilitating its adaptation to effective reasoning patterns. Subsequently, we employ an entropy-preserving supervised fine-tuning (Entropy-SFT) strategy coupled with token-wise entropy-adaptive reinforcement learning (TEA-RL) guided by rule-based dense rewards. This approach encourages the model to transform its reasoning mechanism, ultimately fostering generalizable reasoning abilities that encompass preview and self-checking. Extensive experiments conducted on instruction-following benchmarks demonstrate remarkable performance improvements across various model scales. Notably, our Light-IF-32B model surpasses both larger open-source models such as DeepSeek-R1 and closed-source models like Doubao-1.6.
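The hard/easy/pass prompt split described in the abstract can be sketched as a simple pass-rate filter: sample several responses per prompt and bucket by how often the constraints are satisfied. The sampler, checker, sample count, and thresholds below are illustrative assumptions, not the paper's exact procedure.

```python
def bucket_prompts(prompts, sample_fn, check_fn, n_samples=8,
                   easy_thresh=0.9, hard_thresh=0.1):
    """Split prompts into 'hard', 'easy', and 'pass' sets by the
    fraction of sampled responses that satisfy the constraints.
    (Thresholds and sample count are illustrative choices.)"""
    buckets = {"hard": [], "easy": [], "pass": []}
    for p in prompts:
        responses = [sample_fn(p) for _ in range(n_samples)]
        rate = sum(bool(check_fn(p, r)) for r in responses) / n_samples
        if rate >= easy_thresh:
            buckets["easy"].append(p)   # almost always satisfied: little training signal
        elif rate <= hard_thresh:
            buckets["hard"].append(p)   # almost never satisfied
        else:
            buckets["pass"].append(p)   # mixed outcomes: candidates for rejection sampling
    return buckets
```

In this sketch, only the "pass" bucket feeds the rejection-sampling stage, mirroring the abstract's use of pass prompts for cold-start data curation.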
Problem

Research questions and friction points this paper is trying to address.

Improving LLMs' accuracy in following complex instructions
Addressing lazy reasoning during thinking stages
Enhancing generalizable reasoning with preview and self-checking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preview and self-checking for rigorous reasoning
Entropy-preserving supervised fine-tuning strategy
Token-wise entropy-adaptive reinforcement learning
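The token-wise entropy-adaptive idea can be illustrated with a minimal sketch: up-weight the training signal on high-entropy "decision" tokens and down-weight near-deterministic ones. The weighting function and the `alpha` knob below are hypothetical illustrations, not the TEA-RL formula from the paper.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one token's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def entropy_adaptive_weights(dists, alpha=1.0):
    """Per-token loss weights that grow with predictive entropy.

    `alpha` controls how strongly high-entropy tokens are emphasized
    (an illustrative knob, not a parameter from the paper). With
    alpha < 1, low-entropy tokens still receive some gradient.
    """
    ents = [token_entropy(d) for d in dists]
    max_ent = max(ents) or 1.0  # avoid division by zero
    return [(1.0 - alpha) + alpha * (e / max_ent) for e in ents]
```

Such weights could multiply a per-token RL objective so that updates concentrate on the uncertain tokens where previewing and self-checking decisions are made.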
Authors
Chenyang Wang (Harbin Institute of Technology)
Liang Wen (Qiyuan Tech)
Shousheng Jia (360)
Xiangzheng Zhang (360)
Liang Xu (CLUE)