A Multi-Dimensional Constraint Framework for Evaluating and Improving Instruction Following in Large Language Models

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM instruction-following evaluations rely heavily on templated, constraint-based prompts, which lack realism and fine-grained characterization of semantic constraints. Method: We propose a three-dimensional evaluation framework spanning *constraint patterns*, *constraint categories*, and *difficulty levels*, comprising three patterns, four semantic constraint categories, and four difficulty tiers. We design an automated instruction synthesis pipeline featuring constraint expansion, conflict detection, and instruction rewriting, yielding 1,200 code-verifiable test instances. Contribution/Results: A comprehensive evaluation of 19 state-of-the-art models reveals a sharp performance drop (77.67% at Level I to 32.96% at Level IV) as constraint difficulty increases. Analysis traces instruction-following gains primarily to parameter changes in the attention modules. Reinforcement learning fine-tuning guided by the framework significantly improves instruction adherence without compromising general capabilities.
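
To make the taxonomy concrete, below is a minimal sketch of how one of the 1,200 code-verifiable samples could be represented and checked in Python. The three dimensions follow the summary above, but the field names, the placeholder pattern/category strings, and the `check(output)` convention are illustrative assumptions, not the authors' schema.

```python
from dataclasses import dataclass


@dataclass
class TestSample:
    instruction: str       # the constraint-bearing prompt given to the model
    pattern: str           # one of the paper's three constraint patterns
    categories: list[str]  # drawn from the four constraint categories
    level: int             # difficulty, 1 (Level I) through 4 (Level IV)
    verifier_src: str      # Python source defining check(output) -> bool


def is_followed(sample: TestSample, model_output: str) -> bool:
    """Code-based verification: run the sample's checker on a model response."""
    ns: dict = {}
    exec(sample.verifier_src, ns)  # binds ns["check"]
    return bool(ns["check"](model_output))


# Example: a Level-I sample with a single, trivially checkable length constraint.
# "single-constraint" and "length" are placeholder labels, not the paper's terms.
sample = TestSample(
    instruction="Summarize the paragraph in at most 50 words.",
    pattern="single-constraint",
    categories=["length"],
    level=1,
    verifier_src="def check(output):\n    return len(output.split()) <= 50\n",
)
assert is_followed(sample, "A short summary.")
```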

📝 Abstract
Instruction following evaluates large language models (LLMs) on their ability to generate outputs that adhere to user-defined constraints. However, existing benchmarks often rely on templated constraint prompts, which lack the diversity of real-world usage and limit fine-grained performance assessment. To fill this gap, we propose a multi-dimensional constraint framework encompassing three constraint patterns, four constraint categories, and four difficulty levels. Building on this framework, we develop an automated instruction generation pipeline that performs constraint expansion, conflict detection, and instruction rewriting, yielding 1,200 code-verifiable instruction-following test samples. We evaluate 19 LLMs across seven model families and uncover substantial variation in performance across constraint forms. For instance, average performance drops from 77.67% at Level I to 32.96% at Level IV. Furthermore, we demonstrate the utility of our approach by using it to generate data for reinforcement learning, achieving substantial gains in instruction following without degrading general performance. In-depth analysis indicates that these gains stem primarily from modifications to the parameters of the model's attention modules, which enhance constraint recognition and adherence. Code and data are available at https://github.com/Junjie-Ye/MulDimIF.
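
The abstract names the pipeline's three stages without detailing them; the sketch below shows one plausible wiring, assuming a generic `llm(prompt) -> str` callable. All prompts and function names here are assumptions for illustration, not the authors' implementation, which lives in the linked repository.

```python
from typing import Callable, Optional

LLM = Callable[[str], str]  # any text-in/text-out model call


def expand_constraints(instruction: str, llm: LLM) -> list[str]:
    """Constraint expansion: ask a model to propose extra constraints."""
    reply = llm(
        "Propose additional constraints that could be added to this "
        f"instruction, one per line:\n{instruction}"
    )
    return [line.strip("-• ").strip() for line in reply.splitlines() if line.strip()]


def has_conflict(constraints: list[str], llm: LLM) -> bool:
    """Conflict detection: flag mutually unsatisfiable constraint sets,
    e.g. 'under 50 words' together with 'at least three paragraphs'."""
    reply = llm(
        "Do any of these constraints contradict each other? "
        "Answer yes or no.\n" + "\n".join(constraints)
    )
    return reply.strip().lower().startswith("yes")


def rewrite_instruction(instruction: str, constraints: list[str], llm: LLM) -> str:
    """Instruction rewriting: fold surviving constraints into fluent prose."""
    return llm(
        "Rewrite the instruction so it naturally includes every constraint.\n"
        f"Instruction: {instruction}\nConstraints:\n" + "\n".join(constraints)
    )


def synthesize(instruction: str, llm: LLM) -> Optional[str]:
    """One pass of the pipeline; conflicting candidates are discarded."""
    constraints = expand_constraints(instruction, llm)
    if has_conflict(constraints, llm):
        return None
    return rewrite_instruction(instruction, constraints, llm)
```

In this sketch, candidates whose expanded constraints conflict are dropped rather than repaired, so every synthesized instruction is satisfiable by construction.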
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to follow diverse real-world constraints
Addressing limitations of templated benchmarks in performance assessment
Improving instruction following via multi-dimensional constraint framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional constraint framework for LLMs
Automated instruction generation pipeline
Framework-generated reinforcement learning data for instruction-following gains (sketched below)
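
The RL gains noted above rest on the samples being code-verifiable. This page does not specify the authors' reward design, so the sketch below shows one natural pairing under that assumption: a binary reward from each sample's checker. `Verifier` and `constraint_reward` are hypothetical names.

```python
from typing import Callable

Verifier = Callable[[str], bool]  # the code check paired with each test sample


def constraint_reward(verifier: Verifier, model_output: str) -> float:
    """Binary reward for RL fine-tuning: 1.0 if the response passes the
    sample's programmatic constraint check, else 0.0."""
    return 1.0 if verifier(model_output) else 0.0


# Example: score a response against a hypothetical 50-word length constraint.
under_50_words: Verifier = lambda out: len(out.split()) <= 50
print(constraint_reward(under_50_words, "A short compliant answer."))  # 1.0
```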
Junjie Ye
School of Computer Science, Fudan University
Caishuang Huang
Fudan University
LLM, RLHF, Tool Learning
Zhuohan Chen
School of Computer Science, Fudan University
Wenjie Fu
Ph.D., Southeast University
VLSI design and test automation
Chenyuan Yang
University of Illinois Urbana-Champaign
System Reliability, Machine Learning
Leyi Yang
School of Computer Science, Fudan University
Yilong Wu
Fudan University
Natural Language Processing
Peng Wang
Lenovo Research
Meng Zhou
Tencent
Xiaolong Yang
Tencent
Tao Gui
Institute of Modern Languages and Linguistics, Fudan University
Qi Zhang
School of Computer Science, Fudan University
Zhongchao Shi
Lenovo Research
Jianping Fan
AI Lab at Lenovo Research
AI, Computer Vision, Machine Learning, Quantum Computing
Xuanjing Huang
School of Computer Science, Fudan University