🤖 AI Summary
Reinforcement learning (RL) for instruction following faces a core verification challenge: rule-based checks alone cannot judge soft, semantic constraints, while purely LLM-based judging is unreliable for hard, code-checkable ones. Method: The paper proposes VerIF, a hybrid verification method that combines rule-based code verification with LLM-based verification from a large reasoning model (QwQ-32B), and introduces VerInstruct, a dataset of roughly 22K instruction-following instances augmented with verification signals. These verification scores are used directly as rewards during RL training, without modifying the backbone model or compromising its general capabilities. Contribution/Results: RL with VerIF achieves state-of-the-art results among models of comparable size on several representative instruction-following benchmarks, generalizes well to unseen constraints, and leaves general capabilities intact, so it can be dropped into existing RL training recipes.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has become a key technique for enhancing large language models (LLMs), with verification engineering playing a central role. However, best practices for RL in instruction following remain underexplored. In this work, we explore the verification challenge in RL for instruction following and propose VerIF, a verification method that combines rule-based code verification with LLM-based verification from a large reasoning model (e.g., QwQ-32B). To support this approach, we construct a high-quality instruction-following dataset, VerInstruct, containing approximately 22,000 instances with associated verification signals. We apply RL training with VerIF to two models, achieving significant improvements across several representative instruction-following benchmarks. The trained models reach state-of-the-art performance among models of comparable size and generalize well to unseen constraints. We further observe that their general capabilities remain unaffected, suggesting that RL with VerIF can be integrated into existing RL recipes to enhance overall model performance. We have released our datasets, code, and models to facilitate future research at https://github.com/THU-KEG/VerIF.
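To make the hybrid-verification idea concrete, here is a minimal Python sketch of how such a reward could be assembled. All names (`check_hard_constraints`, `llm_soft_verify`, `verif_reward`) and the constraint schema are hypothetical illustrations, not the released VerIF API; the LLM judge is stubbed out, and the conjunctive 0/1 aggregation is one plausible choice, not necessarily the paper's.

```python
# Sketch of a hybrid verification reward: rule-based code checks for hard
# constraints plus an LLM judge for soft, semantic ones. Names and the
# constraint schema are hypothetical, not VerIF's actual implementation.

def check_hard_constraints(response: str, constraints: dict) -> bool:
    """Rule-based verification: hard constraints checked deterministically."""
    if "max_words" in constraints and len(response.split()) > constraints["max_words"]:
        return False
    required = constraints.get("must_include", [])
    if not all(kw.lower() in response.lower() for kw in required):
        return False
    return True


def llm_soft_verify(instruction: str, response: str) -> bool:
    """LLM-based verification placeholder.

    In practice this would prompt a large reasoning model (e.g., QwQ-32B)
    with the instruction and response and parse a yes/no verdict; here it
    always passes so the sketch stays runnable.
    """
    return True


def verif_reward(instruction: str, response: str, constraints: dict) -> float:
    """Combine both checks into one scalar reward for RL training.

    A conjunctive 0/1 reward is one plausible aggregation; the actual
    method may weight or combine the two signals differently.
    """
    ok = check_hard_constraints(response, constraints) and llm_soft_verify(
        instruction, response
    )
    return 1.0 if ok else 0.0


if __name__ == "__main__":
    instruction = "Describe VerIF in under 50 words, mentioning 'verification'."
    response = "VerIF pairs rule-based code checks with verification by a reasoning model."
    print(verif_reward(instruction, response,
                       {"max_words": 50, "must_include": ["verification"]}))
```

The split mirrors the method's rationale: constraints that code can check exactly (length, keywords, format) stay in deterministic rules, while constraints that require semantic judgment are delegated to the reasoning model.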