RECAST: Strengthening LLMs' Complex Instruction Following with Constraint-Verifiable Data

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit markedly degraded constraint adherence when instructions contain many explicit constraints (particularly more than ten). To address this, RECAST introduces a verifiable data-synthesis paradigm: a dual-mode automatic verification framework that pairs a rule engine (for quantitative constraints) with an LLM-based validator (for qualitative ones). Using this framework, the authors construct RECAST-30K, a high-fidelity complex-instruction dataset of 30,000 instances spanning 15 constraint types. Fine-tuning on RECAST-30K, together with constraint-driven reward modeling for reinforcement learning, yields substantial gains on constraint-following tasks, especially under high constraint density. The core contributions are (i) a verifiable synthesis framework enabling rigorous constraint validation during data creation, and (ii) a large-scale, fine-grained, multi-type constraint dataset for instruction-following evaluation.

📝 Abstract
Large language models (LLMs) are increasingly expected to tackle complex tasks, driven by their expanding applications and users' growing proficiency in crafting sophisticated prompts. However, as the number of explicitly stated requirements increases (particularly more than 10 constraints), LLMs often struggle to accurately follow such complex instructions. To address this challenge, we propose RECAST, a novel framework for synthesizing datasets where each example incorporates far more constraints than those in existing benchmarks. These constraints are extracted from real-world prompt-response pairs to ensure practical relevance. RECAST enables automatic verification of constraint satisfaction via rule-based validators for quantitative constraints and LLM-based validators for qualitative ones. Using this framework, we construct RECAST-30K, a large-scale, high-quality dataset comprising 30k instances spanning 15 constraint types. Experimental results demonstrate that models fine-tuned on RECAST-30K show substantial improvements in following complex instructions. Moreover, the verifiability provided by RECAST enables the design of reward functions for reinforcement learning, which further boosts model performance on complex and challenging tasks.
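The dual-mode verification described in the abstract can be illustrated with a small sketch. This is not the paper's released code; the constraint types, field names, and checker functions below are hypothetical, chosen only to show how rule-based validators handle quantitative constraints while a qualitative type would be routed to an LLM judge instead.

```python
# Illustrative sketch of rule-based constraint verification (assumed API,
# not the paper's actual implementation).

def check_max_words(response: str, limit: int) -> bool:
    """Quantitative check: response must not exceed `limit` words."""
    return len(response.split()) <= limit

def check_must_include(response: str, keyword: str) -> bool:
    """Quantitative check: response must contain `keyword` (case-insensitive)."""
    return keyword.lower() in response.lower()

def verify(response: str, constraints: list[dict]) -> dict:
    """Run each constraint's checker and report per-constraint pass/fail."""
    checkers = {
        "max_words": lambda r, c: check_max_words(r, c["limit"]),
        "must_include": lambda r, c: check_must_include(r, c["keyword"]),
        # A "qualitative" constraint type (e.g., tone, style) would dispatch
        # to an LLM-based judge here instead of a deterministic rule.
    }
    return {c["id"]: checkers[c["type"]](response, c) for c in constraints}

results = verify(
    "RECAST builds verifiable instruction data.",
    [
        {"id": "c1", "type": "max_words", "limit": 10},
        {"id": "c2", "type": "must_include", "keyword": "verifiable"},
    ],
)
print(results)  # {'c1': True, 'c2': True}
```

Per-constraint pass/fail results like these are what make the dataset's constraint satisfaction automatically checkable.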
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle to follow complex instructions containing many (10+) explicit constraints
Existing datasets lack automatic verification of constraint satisfaction
LLM performance on multi-constraint tasks needs improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

RECAST framework synthesizes datasets with far more constraints per example than existing benchmarks
Automatic verification via rule-based validators (quantitative constraints) and LLM-based validators (qualitative ones)
RECAST-30K (30k instances, 15 constraint types) improves complex instruction following; verifiability also enables RL reward design
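The abstract notes that RECAST's verifiability enables reward functions for reinforcement learning. A minimal sketch of one plausible design, assuming the reward is simply the fraction of constraints a response satisfies (the paper's actual reward formulation may differ):

```python
# Hypothetical reward sketch: score a response by the fraction of its
# verifiable constraints that pass. `results` maps constraint id -> pass/fail,
# as produced by an automatic verifier.

def constraint_reward(results: dict[str, bool]) -> float:
    """Reward in [0.0, 1.0]: fraction of constraints satisfied."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

print(constraint_reward({"c1": True, "c2": True, "c3": False}))  # 0.6666666666666666
```

A dense, per-constraint reward like this gives the RL objective a smoother signal than a binary all-or-nothing check.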
👥 Authors
Wenhao Liu (Fudan University)
Zhengkang Guo (Fudan University)
Mingchen Xie (Xiaohongshu Inc.)
Jingwen Xu (Fudan University)
Zisu Huang (Fudan University)
Muzhao Tian (Fudan University)
Jianhan Xu (Fudan University)
Muling Wu (Fudan University)
Xiaohua Wang (Fudan University)
Changze Lv (Fudan University)
He-Da Wang (Xiaohongshu Inc.)
Hu Yao (Xiaohongshu Inc.)
Xiaoqing Zheng (Fudan University)
Xuanjing Huang (Fudan University)